text
stringlengths
100
500k
subset
stringclasses
4 values
A microplanning model to improve door-to-door health service delivery: the case of Seasonal Malaria Chemoprevention in Sub-Saharan African villages André Lin Ouédraogo ORCID: orcid.org/0000-0001-5880-55121, Julie Zhang2,3, Halidou Tinto4, Innocent Valéa4 & Edward A. Wenger1 BMC Health Services Research volume 20, Article number: 1128 (2020) Cite this article Malaria incidence has plateaued in Sub-Saharan Africa despite Seasonal Malaria Chemoprevention's (SMC) introduction. Community health workers (CHW) use a door-to-door delivery strategy to treat children with SMC drugs, but for SMC to be as effective as in clinical trials, coverage must be high over successive seasons. We developed and used a microplanning model that utilizes population raster to estimate population size, generates optimal households visit itinerary, and quantifies SMC coverage based on CHWs' time investment for treatment and walking. CHWs' performance under current SMC deployment mode was assessed using CHWs' tracking data and compared to microplanning in villages with varying demographics and geographies. Estimates showed that microplanning significantly reduces CHWs' walking distance by 25%, increases the number of visited households by 36% (p < 0.001) and increases SMC coverage by 21% from 37.3% under current SMC deployment mode up to 58.3% under microplanning (p < 0.001). Optimal visit itinerary alone increased SMC coverage up to 100% in small villages whereas in larger or hard-to-reach villages, filling the gap additionally needed an optimization of the CHW ratio. We estimate that for a pair of CHWs, the daily optimal number of visited children (assuming 8.5mn spent per child) and walking distance should not exceed 45 (95% CI 27–62) and 5 km (95% CI 3.2–6.2) respectively. Our work contributes to extend SMC coverage by 21–63% and may have broader applicability for other community health programs. Malaria remains the foremost health challenge in Sub-Saharan Africa [1]. Recent data showed that globally, progress in reducing malaria burden has stalled, especially in high-burden countries [1] urging the World Health Organization (WHO) to launch the country-led high burden to high impact (HBHI) approach. The goal of the HBHI is to bring the 11 highest burden countries, 10 in Sub-Saharan Africa plus India, back on track to achieve WHO Global Technical Strategy's milestone which aims to reduce incidence by at least 75% by 2025 [2]; but that is unlikely to succeed unless key burden reduction strategies such as seasonal malaria chemoprevention (SMC) are revisited to maximize impacts. Clinical trials reported that SMC prevent approximately 80% of malaria episodes among treated children [3,4,5]. At the global scale, modeling studies suggest that millions of malaria cases and thousands of deaths could be averted if SMC delivery on the ground was successful [6]. With regard to SMC deployment, studies comparing fixed-location versus door-to-door suggest the latter as most effective with respect to coverage [7,8,9]. After its recommendation by WHO in 2012, SMC was deployed in 2014, mostly in countries across the Sahel and Sahel sub-region where more than 60% of clinical malaria concentrate within 3-to-4 consecutive months. Nevertheless, recent data showed that SMC in its programmatic phase is failing as progress in reducing incidence has plateaued to date despite its introduction [1]. One likely reason SMC is not properly working under real-world conditions is to be associated to its poor delivery in the community. 
In Mali, the average of SMC coverage in 2016 as reported from seven surveys was 53% [10]. In Burkina Faso, post-campaign coverage estimates using SMC cards and parents' statements showed that only a small fraction of children (32%) received all SMC doses over four consecutive rounds [11]. However, for SMC to reach the desired cases reduction, we must see high coverages above 90–95% over successive seasons [6, 12]. Multiple logistic constraints and shortcomings in SMC deployment including CHW ratio per capita [11], excess time loss during treatment [5], and importantly missed households or settlements as reported during Polio vaccination [13,14,15] likely contribute to lower SMC coverage. A country's microplanning strategy can be a complex and difficult process leaving activities more often unoptimized for impact. Here we develop a microplanning model to predict CHW's performance during door-to-door health service delivery and review opportunities to optimize and standardize SMC deployment that may contribute leverage its potential in preventing malaria episodes and accelerate malaria burden reduction toward 2025. The model utilizes population raster data on demographics (family sizes, household geolocations) to assess treatment duration and door-to-door travel times allowing for quantification of the CHW's time investment that is convertible to actual SMC coverage and unmet needs. We then use the model to assess SMC coverage under its current deployment mode and to predict microplanning achievements in African remote villages. Study site Burkina Faso reported approximately 12 million of malaria cases and 4000 deaths in 2018 [1]. Malaria transmission is intense and seasonal [16, 17] and despite SMC, clinical malaria remains on the rise [1]. The health and demographic surveillance site (HDSS) of Nanoro in the centre west provides rich household survey data suitable for microplanning studies [18]. Three villages (Soaw, Rakolo, Mogdin) with different characteristics (geographic or demographics) were selected to test the potential of microplanning in optimizing SMC deployment. Population raster data were extracted to compare villages' characteristics and results are presented in the results' section. Census data were obtained from the HDSS and incidence data from national's District Health Information Software 2 platforms (DHIS2). Microplanning model To improve SMC door-to-door delivery, we used a salesman algorithm-based accessibility model (Fig. 1) to determine the optimal itinerary for CHWs to efficiently visit all households in each village. The model computes the shortest door-to-door visit itinerary using global positioning system (GPS) information of households and outputs travel distance and travel time as well as treatment duration. Households' GPS coordinates and family sizes were extracted from the population raster of each village. A 2015's population raster of Burkina Faso as provided by the Center for International Earth Science Information Network (CIESIN) and the Connectivity Lab at Facebook encapsulates values on number of individuals inside raster's pixels that can be processed to extract family sizes [19]. Fraction of under 5 children per household was subsequently derived from related-family size assuming that under 5 children make up approximately 18% of the total population [20]. Microplanning model design Generating household visit itinerary using the salesman algorithm We describe the traveling salesman problem (TSP) as a graph theory problem. 
Each household is thought of as a vertex and each vertex is connected by an edge, so our graphs is G = (V, E), where V is the set of vertices, and E is the set of edges. Each edge has an associated cost cij, which is the distance between the two households. Our goal is to find the shortest path from any starting vertex that passes through all other vertices without repeating. Unlike the classic TSP, we do not return to the starting household. Instead, we use the Held-Karp algorithm, a dynamic programming solution [21]. The idea is to compute optimal sub-paths. We compute table entries C(S, i, j) for each subset S ⊂ V, and i, j ∈ S, defined to be the length of the shortest path from vertex i to vertex j visiting each vertex in S exactly once (and no node outside of S). The algorithm computes C(S, i, j) for increasing number of vertices in set S, up to N, the total number of vertices. Let C({i, j}, i, j) = cij for all i ≠ j. For k = 3 to N, do For all sets S ⊂ V with k vertices, compute \( C\left(S,i,j\right)=\underset{l\in S\backslash \left\{i,j\right\}}{\min}\left[C\left(S\backslash \left\{j\right\},i,l\right)+{c}_{lj}\right] \) Return the optimal cost \( L=C\left(V,{n}_1,{n}_N\right)=\underset{i\ne j}{\min }C\left(V,i,j\right) \) We now can recover the path as follows: n1, nN are our starting and ending vertices respectively. Vertex nN − 1 is the unique vertex satisfying $$ \mathrm{C}\left(\mathrm{V},{\mathrm{n}}_1,{\mathrm{n}}_{\mathrm{N}}\right)=\mathrm{C}\left(\mathrm{V}\backslash \left\{{\mathrm{n}}_{\mathrm{N}}\right\},{\mathrm{n}}_1,{\mathrm{n}}_{\mathrm{N}-1}\right)+{\mathrm{c}}_{{\mathrm{n}}_{\mathrm{N}-1}{\mathrm{n}}_{\mathrm{N}}}. $$ If we have computed nN − 1, …, nj + 1, then vertex nj is the unique vertex satisfying $$ \mathrm{C}\left(\mathrm{V}\backslash \left\{{\mathrm{n}}_{\mathrm{N}},\dots, {\mathrm{n}}_{\mathrm{j}+2}\right\},{\mathrm{n}}_1,{\mathrm{n}}_{\mathrm{j}+1}\right)=\mathrm{C}\left(\mathrm{V}\backslash \left\{{\mathrm{n}}_{\mathrm{N}},\dots, {\mathrm{n}}_{\mathrm{j}+1}\right\},{\mathrm{n}}_1,{\mathrm{n}}_{\mathrm{j}}\right)+{\mathrm{c}}_{{\mathrm{n}}_{\mathrm{j}}{\mathrm{n}}_{\mathrm{j}+1.}} $$ We now have the whole path (n1, …, nN) along with optimal distance L = C(V, n1, nN). With the path linking household via GPS coordinates, we could estimate walking distance using Euclidian distance formula. Subdividing hard-to-reach areas using k-means clustering For hard-to-reach villages with accessibility constraints (e.g. rivers), the model first clusters households using the constrained K-Means algorithm before determining optimal itinerary and unmet needs for each cluster [22]. We give the mathematical formulas for the constrained K-Means problem. Let the dataset be D = {x1, …, xm}, where xi ∈ Rn. Let 1 ≤ k ≤ m be the number of clusters. We want to find cluster centers C1, . . , Ck such that the distance between each point xi and the nearest cluster center Ch is minimized under the condition that cluster number h must contain at least τh data points, where \( {\sum}_{h=1}^k{\tau}_h\le m \). If τh > 0, this forces clusters to be non-empty, and we can also choose τh such that all clusters have relatively the same number of data points. We let Ti, h ∈ {0, 1} denote the "selection variables" that indicate whether xi belongs to cluster number h. The constrained K-Means problem is as follows. $$ \underset{C,T}{\min }{\sum}_{i=1}^m{\sum}_{h=1}^k{T}_{i,h}\left(\frac{1}{2}{\left\Vert {x}_i-{C}_h\right\Vert}^2\right) $$ We can solve this iteratively. 
At iteration t, let C1, t, …, Ck, t be the cluster centers. We compute the cluster centers C1, t + 1, …, Ck, t + 1 at iteration t + 1 in 2 steps. Cluster assignment: let \( {\mathrm{T}}_{\mathrm{i},\mathrm{h}}^{\mathrm{t}} \) be a solution to the following linear program with Ch, t fixed $$ \underset{\mathrm{T}}{\min }{\sum}_{\mathrm{i}=1}^{\mathrm{m}}{\sum}_{\mathrm{h}=1}^{\mathrm{k}}{\mathrm{T}}_{\mathrm{i},\mathrm{h}}\left(\frac{1}{2}{\left\Vert {\mathrm{x}}_{\mathrm{i}}-{\mathrm{C}}_{\mathrm{h}}\right\Vert}^2\right) $$ $$ \mathrm{subject}\ \mathrm{to}\ {\sum}_{\mathrm{h}=1}^{\mathrm{k}}{\mathrm{T}}_{\mathrm{i},\mathrm{h}}=1,\mathrm{i}=1,\dots, \mathrm{m} $$ $$ \kern5.25em {\sum}_{\mathrm{i}=1}^{\mathrm{m}}{\mathrm{T}}_{\mathrm{i},\mathrm{h}}\ge {\uptau}_{\mathrm{h}},\mathrm{j}=1,\dots, \mathrm{k} $$ $$ \kern8.75em {\mathrm{T}}_{\mathrm{i},\mathrm{h}}\ge 0,\mathrm{i}=1,\dots, \mathrm{m},\mathrm{h}=1,\dots, \mathrm{k} $$ Update Ch, t + 1as follows. If \( {\sum}_{i=1}^m{T}_{i,h}^t=0 \), then no update is made: Ch, t + 1 = Ch, t. If \( {\sum}_{\mathrm{i}=1}^{\mathrm{m}}{\mathrm{T}}_{\mathrm{i},\mathrm{h}}^{\mathrm{t}}>0 \), then $$ {\mathrm{C}}_{\mathrm{h},\mathrm{t}+1}=\frac{\sum_{\mathrm{i}=1}^{\mathrm{m}}{\mathrm{T}}_{\mathrm{i},\mathrm{h}}^{\mathrm{t}}{\mathrm{x}}_{\mathrm{i}}}{\sum_{\mathrm{i}=1}^{\mathrm{m}}{\mathrm{T}}_{\mathrm{i},\mathrm{h}}^{\mathrm{t}}} $$ We terminate when Ch, t + 1 = Ch, t for all h. This algorithm is guaranteed to converge to a locally optimal solution. The constraints in the linear program in the cluster assignment step is equivalent to a Minimum Cost Flow (MCF) problem, a linear network optimization problem. SMC performance under current standard deployment Current standard SMC deployment refers to as a door-to-door delivery performed by a CHW whose geographical orientation and time management is solely based on the CHW's own perception. The number of CHWs for Rakolo during the 2016's SMC campaign was limited to two who were trained by one supervisor (health facility nurse). MAPs were not carried by CHWs although rough sketched paper's map was used by the supervisor to macro-plan SMC deployment across the health facility catchment area. SMC coverage was defined as follows: \( SMC\ Coverage={\sum}_{\mathrm{i}=1}^{\mathrm{T}}\frac{n_i}{N}\times 100 \) where T is campaign duration in days, n is number of treated and N the total number of children. Based on personal communications and on reports, we estimated at 12.5mn the average treatment duration per child ranging under 15mn to above 30mn in 63 to 22% of occasions respectively [23]. During the 2016's SMC campaign, CHWs service packages were loaded with GPS devices and unknowingly provided GPS-tracking itineraries in Rakolo. Walking distances were estimated using visited household coordinates and converted to travel times assuming a 20mn walk per km in the wet season [24, 25]. SMC performance under microplanning To predict SMC coverage under microplanning, the model assumes an initial number of 2 CHWs, daily working time of 8 h, and 4 days of campaign duration. Walking distances in optimized visit itineraries were converted to travel times [24, 25]. Based on current CHWs experiences, we assumed random draws of treatment duration for 1, 2 or 3 children per household as follows t ∼ U (10, 15); t ∼ U (15, 20) or t ∼ U (20, 25) respectively. Assuming a household of 3 children we estimated on average 8.5mn (25mn/3) per child as best optimal treatment duration. 
Predicted SMC coverage was computed as follows: \( SMC\ Coverage=\frac{\boldsymbol{T}}{\boldsymbol{t}}\times 100 \) where T is campaign duration in days and t is total of treatment and travel times in days (Fig. 1). We assessed CHWs' performances (SMC coverages) and unmet needs under current SMC deployment mode and two microplanning scenarios (A and B). Microplanning A consists in optimizing visits itinerary and time invested in treatment while for microplanning B, visits itinerary, time invested in treatment and number of CHWs are optimized. Comparison analyses of proportions of treated children and visited households between current SMC deployment and microplanning A or B were based on Chi2 test. Uncertainties around the optimal number for daily treated children and walking km were estimated as 95% Confidence Intervals using the t-distribution. Unmet needs for SMC performance maximization To maximize SMC performance, unmet needs were estimated by converting maximum time invested to reach 100% of SMC coverage into number of CHWs needed: Unmet needs (CHWs) = \( \frac{t}{T}\times 2 \) where T is campaign duration in days and t is total time invested for treatment and travel (Fig. 1). We chose to express unmet needs as supplementary CHWs instead of supplementary campaign days to reduce workforce burden (fatigue). Predicting population sizes using population raster Three villages with varying characteristics were selected to assess CHWs' performances and unmet needs under current SMC deployment mode and two microplanning scenarios (A and B). In Rakalo, households seem evenly spaced suggesting a uniform dispersion of the population (Fig. 2). Contrarily to Rakolo, households in Mogdin are unevenly spaced leading to a random dispersion of the population with longer distances to connect households (Fig. 2). In Soaw, households are distributed across water streams leading to clumped dispersion of the population suggesting that hard-to-reach scenarios should be accounted for while microplanning SMC delivery. We thus subdivided Soaw into three groups (A, B, C, Fig. 2) using k-means clustering. Predicted under 5 population sizes in Rakolo, Mogdin and Soaw using population raster were not significantly different (146, 324, 945) compared to census data (146, 331, 998 respectively; Fig. 3a). Study villages. Country and admin2 (Health District) MAPS were obtained form https://www.diva-gis.org/gdata. GPS coordinates of households were extracted from https://www.ciesin.columbia.edu/data/hrsl/ Population estimates and malaria burden in study villages Malaria incidence not impacted by current SMC deployment Routine incidence data of Soum's primary health facility catchment area was assumed from at least both villages of Soum and Bogdin (Fig. 1). Data reported by Soaw primary health facility were assumed from at least both villages of Soaw and Rakolo. Note that routine data outside catchment area are likely reported by our health facilities and conversely but it is difficult to estimate a differential incidence data for each village. Temporal malaria incidence increased or stalled in all three villages as from 2013 to 2017 suggesting that SMC deployment has likely not been effective to date (Fig. 3b, c). Similar incidence trends were observed from other health facilities supporting that SMC is likely not impactful in the entire region (Supplementary Figure 1). 
Impact of microplanning on SMC coverage CHWs' performance including number of visited households, number of treated children, walking distance and ultimately SMC coverage were estimated under current SMC deployment mode and compared to microplanning modes (Fig. 4a, b). Performances were predicted using population raster-based population estimates. Two types of microplanning (A, B), both assuming an optimal treatment duration of 8.5mn per child, were introduced to improve current SMC deployment. A microplanning A consisted of providing the CHW with a household visit itinerary plan which helps visualize the extent of the catchment area on a map with the shortest itinerary to visit all households. When microplanning A was predicted ineffective to cover 100% of households within the given campaign duration of 4 days, a microplanning B was introduced which consisted of optimizing (adjusting) the number of CHWs to timely cover all households and treatments. Performance of current SMC deployment mode (a) compared to microplanning (b, c) in Rakolo. Microplan A = Visits itinerary and time invested in treatment are optimized; Microplan B = Visits itinerary, time invested in treatment and number of CHWs are optimized Based on CHWs' tracking data we predicted 37% of SMC coverage (Fig. 4a) in Rakolo which is in line with previous studies [11]. A total of 136 households were predicted to be visited by CHWs when an optimal visit itinerary plan was introduced (microplanning A, Fig. 4b) which significantly reduced CHWs' walking distance by 25% (20 km to 15 km). This represents a significant increase by 36% (χ2 = 19.15, p < 0.001) in the number of visited households as compared to the CHWs' tracking data under current SMC deployment mode (87 households visited, Fig. 4a). The equivalent in SMC coverage was a significant increase by 21% (χ2 = 27.76, p < 0.001, Fig. 4) as the number of children treated increased from 37.3% (121/324) under current SMC deployment mode up to 58.3% (189/324) following the introduction of optimal visit itinerary plan. Maximization of SMC coverage to 100% within 4 days window required 2 additional CHWs as unmet need in the roll out of microplanning B to extend the reach of SMC treatment (Fig. 4c). The practicality of using a model-optimized visit itinerary plan for SMC deployment in Rakolo is illustrated in Fig. 4d. CHWs' tracking data were not available for Mogdin and Soaw and therefore SMC coverage under current SMC deployment was not assessed. Microplanning A was systematically applied to Mogdin and Soaw and replaced by a microplanning B if 100% of SMC coverage was not reached under the former. In Soaw, many children in both clusters A (277 children) and B (235 children) were missed under microplanning A (Table 1). Unmet needs were estimated to be 2.38 and 2.12 additional CHWs required for cluster A and B respectively (Fig. 5a, b). In cluster C, an optimal visit itinerary alone (microplanning A) was enough to 2 CHWs to complete treatment of 146 children alongside 20 km of walking (Fig. 5c). Similarly, the use of an optimal visit itinerary alone in Mogdin was enough to cover all 146 eligible children in less than 4 days (Fig. 5d). The practicality of using a model-optimized visit itinerary plan for SMC deployment in Soaw is illustrated in Supplementary Figure 2. Table 1 Performances of SMC under various deployment modes within 4 days of SMC campaign Performance of SMC microplanning in Soaw (a, b, c) and Mogdin (d). 
Microplan A = Visits itinerary and time invested in treatment are optimized; Microplan B = Visits itinerary, time invested in treatment and number of CHWs are optimized On average we estimated that for a pair of CHWs, the daily optimal number of children for treatment and walking distance should not exceed 45 (95% CI 27–62) and 5 km (95% CI 3.2–6.2) respectively. The present work identified opportunities to extend the reach of SMC treatment in rural African villages. Key inputs to SMC deployment including household visit itinerary, treatment duration and CHW ratio per capita were revisited under microplanning strategies. On average we estimated that for a pair of CHWs, the daily optimal average number of children for treatment and walking distance should not exceed 45 children and 5 km respectively over 4 days of campaign. We showed that optimal household visit itinerary reduces CHWs' walking distance by 25%, increases the number of visited households by 36% and increases SMC coverage by 21% (37% under current standard mode up to 58% under microplanning). SMC coverage could be maximized to 100% only when the number of CHWs was adjusted to proportionate to the number of eligible children. Microplanning household visit itinerary Our estimate of 37% of SMC coverage based on CHWs' door-to-door tracking data is in line with previous findings [11] and possibly reflects current poor performance of SMC in the area. For millions of children in Sub-Saharan Africa, the door-to-door service delivery is a relief and often seen by stakeholders as an equity-focused strategy to leverage. In 2014, door-to-door delivery was deployed to maximize SMC coverage across the Sahel region, particularly to outreach those that are furthest from health facilities. But as with other interventions such as vaccination campaigns, the most vulnerable remain those that are remote and likely to be missed due to poor logistics or lack of microplanning. Our model estimates 21–63% increase in SMC coverage following the introduction of microplanning as 36% of households, previously missed by CHWs under current SMC deployment mode were recovered by microplanning. Using tracking systems and GIS-based microplanning, polio eradication teams were able to recover up to 38% of missed settlements from 8 Nigerian's states during routine vaccine campaigns in 2013 [13] and an outreach of 31–43% (140 out of 322–441 settlements) during supplement immunization campaigns in Kano state [15]. Missing households or entire settlements during door-to-door service delivery is not uncommon in rural areas in Sub Saharan Africa where most households are not connected with roads. Without maps of catchment areas and having to crisscross winding trails and bushes, CHWs' teams will not only walk longer distances but likely will miss households. A recent report highlights that the lack of planned itineraries and similarities between households, compounds contribute to confuse CHWs in their ability to identify the next household to be visited [14]. To be effective on the ground, SMC campaigns need to approach the excellence in delivery of clinical trials, but it must be practical and cost-effective. In clinical trials, households and participants are pre-identified and health workers trained to access them using rough sketch maps [26]. During campaigns, households are likely to be missed if biases in defining catchment areas are initially introduced in sketched paper maps or if such paper maps are not used that all [13, 15, 27]. 
The present work is on track to reproduce clinical trial delivery performance by focusing on tree basic needs for CHWs: go beyond rough sketching of settlements to use better microplanning of household visit itinerary, adjust CHW ratio based on population size and standardize treatment duration. Overall, our microplanning reduces treatment duration by 32% (12.5mn to 8.5mn per child) and can be standardized for better SMC performance. In Sub-Saharan Africa, census data are more often out of date or imprecise at sub-national levels while settlements identification [28,29,30,31] are not guaranteed to allow robust SMC deployment. Such biases are generally not accounted for during monitoring and evaluation efforts and often result in misleading SMC coverage estimates. The present work has the advantage to remotely assess family sizes [32] and adjust CHW ratio prior to SMC deployment. The work presents advantages of computing CHWs' optimized travel itineraries and combining them with local accessibility features and geographies to generate printable accessibility maps or to incorporate such maps into mobile applications to be used by CHWs and supervisors. Our work presents some limitations. We estimated population sizes of 2013 using a 2015 population raster which resulted in less uncertainties. For further estimates, updated population raster will be essential to capture population growth. The model optimizes SMC deployment on Day 1 only assuming that 2nd and 3rd SMC doses are well carried out by mothers on Day 2, 3. CHWs' tracking data under current SMC deployment mode were only available for one village (Rakolo). More field data from villages with varying characteristics are needed to improve model robustness, including uncertainty assessment toward formal model validation. To predict SMC coverage, we did not account for fractions of kids vomiting, refusals or those not found at home (travelers or absence). Such data will be helpful in improving predictions of SMC coverage, and could be accounted for as they become available. Other challenges remain including reaching out to pastoralist and displaced populations due to conflicts. Finally, our salesman algorithm assumes catchment areas are free of physical barriers to generate free walking itineraries and therefore is currently not applicable to urban areas with excessive clustering. As next steps, we plan to include road - network processing to overcome such challenges in urban areas. Strengths, implications for other health programs, policy and practicality The chronic underfunding of the health care system in low- and middle-income countries (LMCI) [33], has led to significant disparities in access to care across different settings. In Burkina Faso for instance the ratio of health infrastructure and of clinic visits per capita is estimated to be 1.03 (range 0.11–2.03) per 10,000 and 0.6 (range 0.08–1.01) per person per year respectively. Most deprived populations live in rural and remote settings [34]. Microplanning using satellite imagery and other geographic information systems [13, 15] are emerging strategies in door-to-door delivery and likely has recently contributed to wild polio eradication from Africa [35]. In the time of COVID-19, and to maximize the distribution of Insecticide Treated Net in COVID-19 affected countries, stakeholders for malaria prevention recently called out the need of microplanning strategies using topographic and route mapping [36] suggesting that our work may be helpful. 
As CHWs have emerged as critical human resources to health systems in LMCI, we believe that the present work might be an opportunity to assist and improve community health programs. To conclude, our work shows that microplanning contributes to extend SMC coverage by 21–63% in villages of Burkina Faso and may be reproducible elsewhere using free population raster. To the best of our knowledge, our work is first to assess opportunities of using microplanning strategies to improve SMC deployment on the ground. While this work focuses on SMC drug delivery, the microplanning strategy behind by addressing both households visit itinerary and CHW ratio per capita may have broader applicability for many other health service delivery programs such as vaccination, family planning or nutrition. Data will be made available on reasonable request to the corresponding author CIESIN: Center for International Earth Science Information Network CHW: Community health worker DHIS2: District Health Information Software 2 HBHI: High burden to high impact LMIC: Low- and Middle- Income Countries SMC: Seasonal Malaria Chemoprevention TSP: Traveling salesman problem World Health Organization. World Malaria Report. Geneva: WHO Press; 2019. http://www.hoint/malaria/wmr. World Health Organization. Global Technical Strategy for Malaria 2016–2030. Geneva: WHO Press, World Health Organization; 2015. https://www.hoint/malaria/publications/atoz/9789241564991/en/. Meremikwu MM, Donegan S, Sinclair D, Esu E, Oringanje C. Intermittent preventive treatment for malaria in children living in areas with seasonal transmission. Cochrane Database Syst Rev. 2012:CD003756. Wilson AL, Taskforce IP. A systematic review and meta-analysis of the efficacy and safety of intermittent preventive treatment of malaria in children (IPTc). PLoS One. 2011;6:e16976. Zongo I, Milligan P, Compaore YD, et al. Randomized noninferiority trial of Dihydroartemisinin-Piperaquine compared with Sulfadoxine-Pyrimethamine plus Amodiaquine for seasonal malaria chemoprevention in Burkina Faso. Antimicrob Agents Chemother. 2015;59:4387–96. Cairns M, Roca-Feltrer A, Garske T, et al. Estimating the potential public health impact of seasonal malaria chemoprevention in African children. Nat Commun. 2012;3:881. Barry A, Issiaka D, Traore T, et al. Optimal mode for delivery of seasonal malaria chemoprevention in Ouelessebougou, Mali: A cluster randomized trial. PLoS One. 2018;13:e0193296. Bojang KA, Akor F, Conteh L, et al. Two strategies for the delivery of IPTc in an area of seasonal malaria transmission in the Gambia: a randomised controlled trial. PLoS Med. 2011;8:e1000409. Kweku M, Webster J, Adjuik M, Abudey S, Greenwood B, Chandramohan D. Options for the delivery of intermittent preventive treatment for malaria to children: a community randomised trial. PLoS One. 2009;4:e7256. Diawara F, Steinhardt LC, Mahamar A, et al. Measuring the impact of seasonal malaria chemoprevention as part of routine malaria control in Kita, Mali. Malar J. 2017;16:325. Compaore R, Yameogo MWE, Millogo T, Tougri H, Kouanda S. Evaluation of the implementation fidelity of the seasonal malaria chemoprevention intervention in Kaya health district, Burkina Faso. PLoS One. 2017;12:e0187460. WHO World Health Organization. Guidelines for the treatment of malaria, 2006. [http://www.hoint/malaria/docs/TreatmentGuidelines]. Barau I, Zubairu M, Mwanza MN, Seaman VY. Improving polio vaccination coverage in Nigeria through the use of geographic information system technology. J Infect Dis. 
2014;210(Suppl 1):S102–10. Malaria Consortium. Notes from a site visit to a seasonal malaria chemoprevention (SMC) program supported by Malaria Consortium in Burkina Faso, August 18–22, 2019, 2017. https://filesgivewellorg/files/conversations/Malaria_Consortium_site_visit_in_Burkina_Faso_August_18_22_2019.pdf. Gali E, Mkanda P, Banda R, et al. Revised Household-Based Microplanning in Polio Supplemental Immunization Activities in Kano State, Nigeria. 2013–2014. J Infect Dis. 2016;213(Suppl 3):S73–8. Ouedraogo AL, de Vlas SJ, Nebie I, et al. Seasonal patterns of plasmodium falciparum gametocyte prevalence and density in a rural population of Burkina Faso. Acta Trop. 2008;105:28–34. Ouedraogo AL, Goncalves BP, Gneme A, et al. Dynamics of the human infectious reservoir for malaria determined by mosquito feeding assays and ultrasensitive malaria diagnosis in Burkina Faso. J Infect Dis. 2016;213:90–9. Derra K, Rouamba E, Kazienga A, et al. Profile: Nanoro health and demographic surveillance system. Int J Epidemiol. 2012;41:1293–301. Facebook Connectivity Lab and Center for International Earth Science Information Network - CIESIN - Columbia University. High Resolution Settlement Layer (HRSL). 2016. https://www.ciesin.columbia.edu/data/hrsl/. Seattle - United States: Institute for Health Metrics and Evaluation (IHME). Global Burden of Disease Collaborative Network. Global Burden of Disease Study 2017 (GBD 2017) Population Estimates 1950–2017. http://ghdxhealthdataorg/record/ihme-data/gbd-2017-population-estimates-1950-2017 2015. Held M, Karp RM. A dynamic programming approach to sequencing problems. J Soc Ind Appl Math. 1962;10:196–210. Bennett K, Bradley P, Demiriz A. Constrained k-means clustering. Technical Report MSR-TR-2000-65. Microsoft Research, Redmond, WA. 2000. Zongo I OJ, Lal S, Cairns M, Scott S, Snell P, Moroso D, Milligan P. Coverage of SMC in Burkina Faso 2017. https://filesgivewellorg/files/DWDA%202009/Malaria%20Consortium/SMC_in_Burkina_Faso_Coverage_surveys_2017.pdf 2018. Blanford JI, Kumar S, Luo W, MacEachren AM. It's a long, long walk: accessibility to hospitals, maternity and integrated health centers in Niger. Int J Health Geogr. 2012;11:24. Makanga PT, Schuurman N, Sacoor C, et al. Seasonal variation in geographical access to maternal health services in regions of southern Mozambique. Int J Health Geogr. 2017;16:1. Ba EH, Pitt C, Dial Y, et al. Implementation, coverage and equity of large-scale door-to-door delivery of seasonal malaria chemoprevention (SMC) to children under 10 in Senegal. Sci Rep. 2018;8:5489. Dougherty L, Abdulkarim M, Mikailu F, et al. From paper maps to digital maps: enhancing routine immunisation microplanning in Northern Nigeria. BMJ Glob Health. 2019;4:e001606. Gidado SO, Ohuabunwo C, Nguku PM, et al. Outreach to underserved communities in northern Nigeria, 2012-2013. J Infect Dis. 2014;210(Suppl 1):S118–24. Kamadjeu R. Tracking the polio virus down the Congo River: a case study on the use of Google earth in public health planning and mapping. Int J Health Geogr. 2009;8:4. Kamanga A, Renn S, Pollard D, et al. Open-source satellite enumeration to map households: planning and targeting indoor residual spraying for malaria. Malar J. 2015;14:345. Kelly GC, Seng CM, Donald W, et al. A spatial decision support system for guiding focal indoor residual interventions in a malaria elimination zone. Geospat Health. 2011;6:21–31. Checchi F, Stewart BT, Palmer JJ, Grundy C. 
Validity and feasibility of a satellite imagery-based method for rapid estimation of displaced populations. Int J Health Geogr. 2013;12:4. Oleribe OO, Momoh J, Uzochukwu BS, et al. Identifying key challenges facing healthcare systems in Africa and potential solutions. Int J Gen Med. 2019;12:395–403. Zon H, Pavlova M, Groot W. Regional health disparities in Burkina Faso during the period of health care decentralization. Results of a macro-level analysis. Int J Health Plann Manag. 2020;35:939–59. Guglielmi G. Africa declared free from wild polio - but vaccine-derived strains remain. Nature. 2020. The Alliance for Malaria Prevention (AMP). Considerations for distribution of insecticide-treated nets (ITNs) in COVID-19 affected countries. https://endmalariaorg/sites/default/files/Considerations%20for%20distribution%20of%20insecticide-treated%20nets%20%28ITNs2920in%20COVID-19%20affected%20countries.pdf 2020. We thank the Ministry of Health and the primary health facility nurses and community's health workers of Burkina for explaining their contribution to SMC campaigns in Burkina. We thank the personnel of the National Malaria Control Program in Burkina for their detailed description of SMC deployment in Burkina Faso. Thank you to Karim Derra from the Clinical Research Unit of Nanoro, Burkina Faso and Kurt Frey from the Institute for Disease Modeling for helpful discussions. This publication is based on research funded in whole or in part by the Bill & Melinda Gates Foundation, including models and data analysis performed by the Institute for Disease Modeling at the Bill & Melinda Gates Foundation. JZ was additionally supported by the National Science Foundation Graduate Research Fellowship under Grant No. DGE-1656518. Institute for Disease Modeling, Bill and Melinda Gates Foundation, 500 5th Ave N, Seattle, WA, 98109, USA André Lin Ouédraogo & Edward A. Wenger Department of Mathematics and Statistics, University of Washington, Seattle, WA, USA Julie Zhang Department of Statistics, Stanford University, Palo Alto, CA, USA Institut de Recherche en Sciences de la Santé, Clinical Research Unit of Nanoro, Nanoro, Burkina Faso Halidou Tinto & Innocent Valéa André Lin Ouédraogo Halidou Tinto Innocent Valéa Edward A. Wenger ALO conceived and developed the microplanning modeling framework. JZ and EW contributed to model framework development and methods. HT and IV provided household survey data, CHWs' tracking data and malaria incidence data. ALO run the simulations an analyzed the data. ALO wrote the original draft of the manuscript. All authors contributed intellectually and made contributions to the manuscript text. JZ is a PhD student at Stanford University, Department of Statistics. EAW is Deputy Director of Research Technology at the Institute for Disease Modeling IV is Deputy Director at IRSS - Clinical Research Unit of Nanoro, Burkina Faso HT is Director at IRSS - Clinical Research Unit of Nanoro, Burkina Faso Correspondence to André Lin Ouédraogo. Data presented here were from publicly available sources and did not require ethical clearance. Additional file 1: Supplementary Figure 1. Malaria incidence as reported by the Nanoro inpatient facility. Optimized household visit itineraries over OpenStreetMap in Soaw. Additional file 3: Supplemental methods. Extraction of households' global positioning system (GPS) coordinates and family sizes from population raster. Ouédraogo, A.L., Zhang, J., Tinto, H. et al. 
A microplanning model to improve door-to-door health service delivery: the case of Seasonal Malaria Chemoprevention in Sub-Saharan African villages. BMC Health Serv Res 20, 1128 (2020). https://doi.org/10.1186/s12913-020-05972-2 Microplanning Satellite imagery
CommonCrawl
Animal Systematics, Evolution and Diversity The Korean Society of Systematic Zoology 2234-8190(eISSN) Agriculture, Fishery and Food > Science of Animal Resources http://www.e-ased.org KSCI KCI Issue spc9 Issue nspc8 Volume 1 Issue 1_2 New Record of Two Apokeronopsis Species (Ciliophora: Urostylida: Pseudokeronopsidae) from Korea Jung, Jae-Ho;Baek, Ye-Seul;Min, Gi-Sik 115 https://doi.org/10.5635/KJSZ.2011.27.2.115 PDF KSCI The morphology of the two marine hypotrichous ciliates Apokeronopsis bergeri and A. ovalis, isolated from the Yellow Sea, Korea, are described based on live and protargol-impregnated specimens. It is the first time that these species have been recorded in Korea. In addition, the small subunit ribosomal RNA gene was sequenced for comparison with the public database. The genus Apokeronopsis has recently been established in the family Pseudokeronopsidae, and the two congeners of the Korean population share the following characteristics: one row of one or more buccal cirri; usually two frontoterminal cirri; midventral complex composed of two distinctly separated rows; one left and one right marginal row; number of transverse cirri, more than eight; absence of caudal cirri; two types of cortical granules. Apokeronopsis bergeri differs from A. ovalis primarily in body shape (fusiform vs. oval form), size (usually $260{\times}80{\mu}m$ vs. $160{\times}55{\mu}m$), type II cortical granules (oval vs. round shape; yellow-green vs. mostly colourless and only a few yellow-green in colour), and morphometric data (75-106 vs. 53-70 in adoral membranelles; 37-47 vs. 24-36 in frontal cirri; 9-15 vs. 1-2 in buccal cirri), as well as molecular data (2.87% of pairwise distance). First Record of Paciforchestia pyatakovi (Crustacea: Amphipoda: Talitridae) from Korea Kim, Min-Seop;Min, Gi-Sik 123 The primitive beachflea, Paciforchestia pyatakovi (Derzhavin, 1937) living in gravel beaches was previously reported only from Japan and Russia. This species can be easily distinguished from another species in the same genus, P. klawei (Bousfield, 1961), by the degradation of all plepods and both rami that each consist of single article bearing 2-4 distal plumose setae. Descriptions of the diagnostic characteristics of the species are presented. Morphological Descriptions of Four Oligotrich Ciliates (Ciliophora: Oligotrichia) from Southern Coast of Korea Lee, Eun-Sun;Shin, Mann-Kyoon;Kim, Young-Ok 131 For the purpose of taxonomical description of marine oligotrich ciliates, water samples were collected from the southern coast of Korea (Masan Bay and Jangmok Bay). Ciliate cells were identified based on protargol impregnated specimens. As a result, four oligotrich ciliates were identified and redescribed: Rimostrombidium conicum (Kahl, 1932), Omegastrombidium kahli Song et al., 2009 and Spirotontonia turbinata (Song and Bradbury, 1998), and Spirotontonia grandis (Suzuki and Han, 2000). Of them, R. conicum, O. kahli, and S. turbinata are newly recorded and S. grandis is recorded for the second time in Korea, while the last one is redescribed to compare its variations according to locality. In addition, their abundances were analyzed and discussed the changes in accordance with water temperature and salinity. 
Taxonomic Study of the Genus Thalia (Thaliacea: Salpida: Salpidae) from Korea Kim, Sun-Woo;Won, Jung-Hye;Kim, Chang-Bae 142 Five species in the genus Thalia of the family Salpidae are described: Thalia cicar van Soest, 1973, Thalia democratica (Forskal, 1775), Thalia orientalis Tokioka, 1937, Thalia rhomboides (Quoy and Gaimard, 1824), and Thalia sibogae van Soest, 1973. All of these species are new to the Korean fauna. A key to the Korean Thalia species is provided. Fish Community Structure in the Pyeongchanggang River Choi, Jun-Kil 151 Fish community structure in the Pyeongchanggang River was investigated from April to November 2009. About 900 individuals representing 24 species from eight families at six sites in the Pyeongchanggang River were collected. It was similar to the 2001's survey and it was less than 2006's survey. The Korean endemic species, Zacco koreanus was the most abundant, whereas subdominant species were native species, such as Pungtungia herzi, Zacco platypus, Rhynchocypris kumgangensis and Rhynchocypris oxycephalus. Three endangered species were collected at the sampling area, Acheilognathus signifier (relative abundance [RA] 0.9%), Pseudopungtungia tenuicorpa (RA 1.4%), and Cottus koreanus (RA 3.6%). One natural monument species, Hemibarbus mylodon, was included. According to the analysis of ecological indicator characteristics, the relative proportion of tolerant species was 6.3% (57 individuals), whereas the proportion of sensitive species was 65.9% (593 individuals). Species evenness, richness and diversity indices decreased gradually through the month from April to November during the study. Community indices in Pyeongchanggang River showed a high evenness index (J'>0.6), a low level of species richness (R<3.5) and a medium level of diversity (1.5<3.5). An ecological river health assessment based on the index of biological integrity (IBI), indicated that ecological river health varied depending on location and time of sampling. However, the average IBI score was 25 (n=24), indicating a "fair condition" in the Pyeongchanggang River. DNA Barcode Examination of Bryozoa (Class: Gymnolaemata) in Korean Seawater Lee, Hyun-Jung;Kwan, Ye-Seul;Kong, So-Ra;Min, Bum-Sik;Seo, Ji-Eun;Won, Yong-Jin 159 DNA barcoding of Bryozoa or "moss animals" has hardly advanced and lacks reference sequences for correct species identification. To date only a small number of cytochrome c oxidase subunit I (COI) sequences from 82 bryozoan species have been deposited in the National Center for Biotechnology Information (NCBI) GenBank and Barcode of Life Data Systems (BOLD). We here report COI data from 53 individual samples of 29 bryozoan species collected from Korean seawater. To our knowledge this is the single largest gathering of COI barcode data of bryozoans to date. The average genetic divergence was estimated as 23.3% among species of the same genus, 25% among genera of the same family, and 1.7% at intraspecific level with a few rare exceptions having a large difference, indicating a possibility of presence of cryptic species. Our data show that COI is a very appropriate marker for species identification of bryozoans, but does not provide enough phylogenetic information at higher taxonomic ranks. Greater effort involving larger taxon sampling for the barcode analyses is needed for bryozoan taxonomy. A New Record of Collix stellata (Lepidoptera: Geometridae) from Korea Choi, Sei-Woong;Na, Sang-Hyun 164 We report a larentiine species, Collix stellata Warren, for the first time from Korea. 
Two males and one female were collected from Jeju-do Island, South Korea. Collix stellata is similar to Collix ghosa Walker in external appearances, but can be distinguished by the relatively larger discal dot on forewing and the relatively slender valva with distally projected margin of male genitalia. Diagnosis and description of the species are given with the figures of the genitalia. First Record of the Genus Pseudodryinus (Hymenoptera: Dryinidae: Dryininae) in Korea Kim, Chang-Jun;Choi, Gang-Won;Lee, Jong-Wook 167 The genus Pseudodryinus Olmi, 1989 belonging to the subfamily Dryininae has been reported for the first time in Korea, base on the discovery Pseudodryinus shihoae Mita, 2009. We have provided a redescription and photographs of the diagnostic characteristics of the species. First Record of the Little-known Genus Hellwigia (Hymenoptera: Ichneumonidae: Campopleginae) from Korea Choi, Jin-Kyung;Jeong, Jong-Chul;Lee, Jong-Wook 170 We report on the discovery of Hellwigia obscura Gravenhorst, 1823, a species new to Korea. A description based on both sexes with photographs and a key to the world Hellwigia species are provided. First Record of the Genus Pagurixus (Crustacea: Decapoda: Anomura: Paguridae) from Hyung-ge Island, Southern Korea Kim, Mi-Hyang;Kim, Jung-Nyun;Oh, Chul-Woong 176 A pagurid hermit crab, Pagurixus patiae collected from Hyung-ge Island, Busan, southern Korea is newly recorded into the Korean fauna. Pagurixus patiae is the only species of the genus recorded in Korea. Morphological descriptions of P. patiae are provided. A Newly Recorded Sea Star (Asteroidea: Forcipulatida: Asteriidae) from the East Sea, Korea Lee, Taek-Jun;Shin, Sook 180 Sea stars were collected with fishing nets between depths of 40-150 m from the Gangwon-do coastal region, East Sea. Specimens were identified as Evasterias echinosoma Fisher, 1926 belonging to the family Asteriidae, which is new to the Korean fauna. This species was characterized by strong external spines and a general size of more than 200 mm, thus the largest sea star identified in Korea to date. Its morphological characteristics are described here with photos. Thirty two asteroid species including E. echinosoma have been reported from the East Sea of Korea. New Records of Three Xanthid Crabs (Decapoda: Brachyura: Xanthidae) from Jejudo Island in Korea Lee, Seok-Hyun;Ko, Hyun-Sook 183 Three xanthid crabs, Liomera margaritata, Neoxanthops lineatus and Pilodius miersi, are described and illustrated for the first time in Korea. Liomera margaritata and N. lineatus are the first species of their genera to be found in Korea. The three species represent extensions of their previously known ranges and bring the number of known species of the xanthid crabs to 25 from Korean waters. First Records of Two Pilumnid Crabs (Crustacea: Decapoda) Collected from Jejudo Island, Southern Korea Lee, Seok-Hyun;Lee, Kyu-Hyun;Ko, Hyun-Sook 191 Two pilumnid species, Echinoecus nipponicus and Zehntneriana amakusae, are described and illustrated from Jejudo Island, southern Korea. These species are recorded for the first time in Korea, and Z. amakusae is the sole representative of the subfamily Rhizopinae of the family Pilumnidae. 
A New Record of Invasive Alien Colonial Tunicate Clavelina lepadiformis (Ascidiacea: Aplousobranchia: Clavelinidae) in Korea Pyo, Joo-Yeon;Shin, Sook 197 Tunicates were collected from three harbors (Gampo, Bangeojin, Daebyeon) in Gyeongsangnam-do and one harbor (Seogwipo) in Jejudo Island during the period from August 2008 to January 2011 and were identified on the basis of their morphological characteristics. Among them, colonial tunicate Clavelina lepadiformis (Muller, 1776) belonging to the family Clavelinidae was found to be an invasive alien species introduced from the North Atlantic, and this is the first record of its occurrence in Korea. A New Record of Melita bingoensis (Crustacea: Amphipoda: Melitidae) from Korea Shin, Myung-Hwa;Kim, Won 201 A melitid species, Melita bingoensis Yamato, 1987 collected from the Yellow Sea, is reported for the first time in Korea. This species has a slit-like pocket in coxa 6 of female, which is not found in other Korean melitid amphipods. In this paper, we compared this species with three other previously known Korean species in the same genus.
CommonCrawl
View source for Type I errors Back to page | ← Type I errors {{StatsPsy}} See also:[[Type I and type II errors]] for more context A '''Type I error''' , also called a '''false positive''', exists when a test incorrectly reports that it has found a result where none really exists. ==False positive rate== The '''false positive rate''' is the proportion of negative instances that were erroneously reported as positive. It is equal to 1 minus the [[specificity]] of the test. : <math>{\rm false\ positive\ rate} = \frac{\rm number\ of\ false\ positives}{\rm number\ of\ negatives}</math> In [[statistical hypothesis testing]], this fraction is sometimes described as the ''size'' of the test, and is given the symbol [[alpha (letter)|α]]. ==False positives vs. false negatives== When developing detection algorithms (that is, tests) there is a tradeoff between false positives, and [[false negative]]s (in which an actual match is not detected). A [[threshold]] value can be varied to make the algorithm more restrictive or more sensitive. Restrictive algorithms risk rejecting true positives while more sensitive algorithms risk accepting false positives. ==False positives in medicine== False positives are a significant issue in [[medicine|medical]] testing. In some cases, there are two or more tests that can be used, one of which is simpler and less expensive, but less accurate, than the other. For example, the simplest tests for [[HIV]] and [[hepatitis]] in blood have a significant rate of false positives. These tests are used to screen out possible [[blood transfusion|blood donors]], but more expensive and more precise tests are used in medical practice, to determine whether a person is actually infected with these viruses. Perhaps the most widely discussed false positives in medicine come from screening [[mammography]], a test to detect breast cancer. The US rate of false positive mammograms is up to 15%, the highest in world. The lowest rate in the world is in Holland, 1%. The lowest rates are generally in Northern Europe where mammography films are read twice and a high threshold for additional testing is set. One consequence of the US's high false positive rate is that, in a ten year period, half of American women receive a false positive mammogram. False positive mammograms are costly, with over $100 million spent annually in the US on unnecessary follow-up testing and treatment. They also cause women unneeded anxiety. Research has shown that the anxiety associated with receiving a false positive can be reduced if the time between the abnormal result and the all clear is reduced. False positives are also problematic in [[biometric]] scans, such as [[retina]] scans or [[Facial recognition system|facial recognition]], when the scanner incorrectly identifies someone as matching a known person, either a person who is entitled to enter the system, or a suspected criminal. False positives can produce serious and counterintuitive problems when the condition being searched for is rare. If a test has a false positive rate of one in ten thousand, but only one in a million samples (or people) is a true positive, most of the "positives" detected by the test will be false. The probability that an observed positive result is a false positive may be calculated, and the problem of false positives demonstrated, using [[Bayes' theorem#An example: False positives|Bayes' theorem]]. 
==False positives in computer database searching== In computer database searching, false positives are documents that are retrieved by a search despite their [[Relevance (Information Retrieval)| irrelevance ]] to the search question. False positives are common in [[full text search|full text searching]], in which the [[search algorithm]] examines all of the text in all of the stored documents in an attempt to match one or more search terms supplied by the user. Most false positives can be attributed to the deficiencies of [[natural language]], which is often ambiguous: the term "home," for example, may mean "a person's dwelling" or "the main or top-level page in a Web site." The false positive rate can be reduced by using a [[controlled vocabulary]], but this solution is expensive because the vocabulary must be developed by an expert and applied to documents by trained indexers. ==False positives and spam== The term "False positive" is also used when [[spam filtering]] or spam blocking techniques wrongly classify a legitimate email message as [[e-mail spam|spam]] and as a result interferes with its delivery. The opposite, a False Negative, occurs when filtering allows a spam email to be delivered to a user's inbox. While most anti-spam tactics can block or filter a high percentage of unwanted emails, doing so without creating significant false-positive results is a much more demanding task. A commonly referenced sub-category is the "Critical False-Positive." This term is used to distinguish the accidental blocking of mass-emails that may not be spam, but are not generally regarded as critical communications, in contrast with user to user messages and automated transaction notifications where timely delivery is much more important. ==False positives and [[malware]]== The term '''False positive''' is also used when [[antivirus]] software wrongly classifies a file as a [[virus]]. The incorrect detection may occur either by [[heuristics]] or by an incorrect virus signature in a database. Similar problems can occur with [[antitrojan]] or [[antispyware]] software. ==False positives and [[ghost]] investigation== '''False positive''' has been adopted by [[paranormal]] or [[ghost]] investigation groups to describe a photograph, recording, or other evidence that incorrectly appears to have a paranormal origin. In other words, a '''false positive''' in this context is a disproven piece of media (image, movie, audio recording, etc.) that has a normal explanation. Several sites provide examples of false positives, including [http://the-atlantic-paranormal-society.com/images/tapspics/index.html The Atlantic Paranormal Society (TAPS)] and [http://www.moorestownghostresearch.com/FalsePositives.html Moorestown Ghost Research]. ==See also== <!-- please keep this list in alphabetical order--> * [[Type I and type II errors]] * [[False Negative]] * [[Free text search]] * [[Information retrieval#Performance measures|Information retrieval performance measures]] * [[Odds ratio]] * [[Receiver-operator characteristic]] * [[Search engine]] * [[Sensitivity (tests)|Sensitivity testing ]] * [[Specificity]] * [[Type II error]] * [[Type III error]] ==References== *Abramson, I., Wolfson, T., Marcotte, T. D., & Grant, I. (1999). Extending the p-plot: Heuristics for multiple testing: Journal of the International Neuropsychological Society Vol 5(6) Sep 1999, 510-517. *Aguinis, H., Sturman, M. C., & Pierce, C. A. (2008). 
Some properties of preliminary tests of equality of variances in the two-sample location problem: Journal of General Psychology Vol 123(3) Jul 1996, 217-231. *Zimmerman, D. W. (2002). A warning about statistical significance tests performed on large samples of nonindependent observations: Perceptual and Motor Skills Vol 94(1) Feb 2002, 259-263. *Zimmerman, D. W. (2003). A warning about the large-sample Wilcoxon-Mann-Whitney test: Understanding Statistics Vol 2(4) Oct 2003, 267-280. *Zimmerman, D. W. (2004). Inflation of Type I Error Rates by Unequal Variances Associated with Parametric, Nonparametric, and Rank-Transformation Tests: Psicologica Vol 25(1) 2004, 103-133. *Zimmerman, D. W., Williams, R. H., & Zumbo, B. D. (1992). Correction of the Student t statistic for nonindependence of sample observations: Perceptual and Motor Skills Vol 75(3, Pt 1) Dec 1992, 1011-1020. *Zinkgraf, S. A. (1981). The statistical effects of the misidentification of selected stationary time series models: Dissertation Abstracts International. *Zumbo, B. D. (1996). Randomization test for coupled data: Perception & Psychophysics Vol 58(3) Apr 1996, 471-478. *Zwick, R. (1986). Rank and normal scores alternatives to Hotelling's Tsuperscript 2: Multivariate Behavioral Research Vol 21(2) Apr 1986, 169-186. *Zwick, R. (1993). Pairwise comparison procedures for one-way analysis of variance designs. Hillsdale, NJ, England: Lawrence Erlbaum Associates, Inc. *Zwick, R., & Marascuilo, L. A. (1984). Selection of pairwise multiple comparison procedures for parametric and nonparametric analysis of variance models: Psychological Bulletin Vol 95(1) Jan 1984, 148-155. [[Category:Statistics]] [[Category:Experimental design]] [[Category:Spam filtering]] <!-- [[de:Falsch positiv]] [[ko:거짓 양성]] [[id:False positive]] [[nl:Fout-positief en fout-negatief]] [[pl:Błąd pierwszego rodzaju]] [[pt:Erro de tipo I]] [[su:Type I error]] --> {{enWP|False_positive}} Template:Browsebar (view source) Template:EnWP (view source) Template:StatsPsy (view source) Return to Type I errors.
Algorithms for Molecular Biology Software article Kohdista: an efficient method to index and query possible Rmap alignments Martin D. Muggli1, Simon J. Puglisi2 & Christina Boucher ORCID: orcid.org/0000-0001-9509-97253 Algorithms for Molecular Biology volume 14, Article number: 25 (2019) Cite this article Genome-wide optical maps are ordered high-resolution restriction maps that give the position of occurrence of restriction cut sites corresponding to one or more restriction enzymes. These genome-wide optical maps are assembled using an overlap-layout-consensus approach using raw optical map data, which are referred to as Rmaps. Due to the high error rate of Rmap data, finding the overlap between Rmaps remains challenging. We present Kohdista, which is an index-based algorithm for finding pairwise alignments between single molecule maps (Rmaps). The novelty of our approach is the formulation of the alignment problem as automaton path matching, and the application of modern index-based data structures. In particular, we combine the use of the Generalized Compressed Suffix Array (GCSA) index with the wavelet tree in order to build Kohdista. We validate Kohdista on simulated E. coli data, showing the approach successfully finds alignments between Rmaps simulated from overlapping genomic regions. We demonstrate that Kohdista is the only method that is capable of finding a significant number of high quality pairwise Rmap alignments for large eukaryote organisms in reasonable time. There is a current resurgence in generating diverse types of data, to be used alone or in concert with short read data, in order to overcome the limitations of short read data. Data from an optical mapping system [1] is one such example and has itself become more practical with falling costs of high-throughput methods. For example, the current BioNano Genomics Irys System requires one week and $1000 USD to produce the Rmap data for an average size eukaryote genome, whereas it required $100,000 and 6 months in 2009 [2]. These technological advances and the demonstrated utility of optical mapping in genome assembly [3,4,5,6,7] have driven several recent tool development efforts [8,9,10]. Genome-wide optical maps are ordered high-resolution restriction maps that give the position of occurrence of restriction cut sites corresponding to one or more restriction enzymes. These genome-wide optical maps are assembled using an overlap-layout-consensus approach using raw optical map data, which are referred to as Rmaps. Hence, Rmaps are akin to reads in genome sequencing. In addition to the inaccuracies in the fragment sizes, there is the possibility of cut sites being spuriously added or deleted, which makes the problem of finding pairwise alignments between Rmaps challenging. To date, however, there is no efficient, non-proprietary method for finding pairwise alignments between Rmaps, which is the first step in assembling genome-wide maps. Several existing methods are superficially applicable to Rmap pairwise alignments but all programs either struggle to scale to even moderate size genomes or require significant further adaptation to the problem. Several methods exhaustively evaluate all pairs of Rmaps using dynamic programming. One of these is the method of Valouev et al. [11], which is capable of solving the problem exactly but requires over 100,000 CPU hours to compute the alignments for rice [12].
The others are SOMA [13] and MalignerDP [10] which are designed only for semi-global alignments instead of overlap alignments, which are required for assembly. Other methods reduce the number of map pairs to be individually considered by initially finding seed matches and then extending them through more intensive work. These include OMBlast [9], OPTIMA [8], and MalignerIX [10]. These, along with MalignerDP, were designed for a related alignment problem of aligning consensus data but cannot consistently find high quality Rmap pairwise alignments in reasonable time as we show later. This is unsurprising since these methods were designed for either already assembled optical maps or in silico digested sequence data for one of their inputs, both having a lower error rate than Rmap data. In addition, Muggli et al. [14] presented a method called Twin, which aligns assembled contigs to a genome-wide optical map. Twin differs from these previous methods in that it is unable to robustly find alignments between pairs of Rmaps due to the presence of added or missing cut-sites. In this paper, we present a fast, error-tolerant method for performing pairwise Rmap alignment that makes use of a novel FM-index based data structure. Although the FM-index can naturally be applied to short read alignment [15, 16], it is nontrivial to apply it to Rmap alignment. The difficulty arises from: (1) the abundance of missing or false cut sites, (2) the fragment sizes require inexact fragment-fragment matches (e.g. 1547 bp and 1503 bp represent the same fragment), (3) the Rmap sequence alphabet consists of all unique fragment sizes and is thus extremely large (e.g., over 16,000 symbols for the goat genome). The second two challenges render inefficient the standard FM-index backward search algorithm, which excels at exact matching over small alphabets since each step of the algorithm extends the search for a query string by a single character c. If the alphabet is small (say, the DNA alphabet) then a search for symbols of the alphabet other than c can be incorporated without much cost to the algorithm's efficiency. Yet, if the alphabet is large enough this exhaustive search becomes impractical. The wavelet tree helps to remedy this problem. It allows efficiently answering queries of the form: find all symbols that allow extension of the backward search by a single character, where the symbol is within the range \([\alpha _1 \ldots \alpha _k]\) and where \(\alpha _1\) and \(\alpha _k\) are symbols in the alphabet such that \(\alpha _1\le \alpha _k\) [17]. In the case of optical mapping data, the alphabet is all fragment sizes. Thus, Muggli et al. [14] showed that constructing the FM-index and wavelet tree from this input can allow for sizing error to be accounted for by replacing each query in the FM-index backward search algorithm with a range query supported by the wavelet tree, i.e., if the fragment size in the query string is x then the wavelet tree can support queries of the form: find all fragment sizes that allow extension of the backward search by a single fragment, where the fragment size is in the range \([x - y, x + y]\) and y is a threshold on the error tolerance. Muggli et al. [14] demonstrated that the addition of the wavelet tree can remedy the first two problems, i.e., sizing error and alphabet size, but the first and most notable challenge requires a more complex index-based data structure.
The addition of the wavelet tree to the FM-index is not enough to allow for searches that are robust to inserted and deleted cut sites. To overcome the challenge of having added or deleted cut sites while still accommodating the other two challenges, we develop Kohdista, an index-based Rmap alignment program that is capable of finding all pairwise alignments in large eukaryote organisms. We first abstract the problem to that of approximate-path matching in a directed acyclic graph (DAG). The Kohdista method then indexes a set of Rmaps represented as a DAG, using a modified form of the generalized compressed suffix array (GCSA), which is a variant of the FM-index developed by Sirén et al. [18]. Hence, the constructed DAG, which is stored using the GCSA, stores all Rmaps, along with all variations obtained by considering all speculative added and deleted cut sites. The GCSA stores the DAG in a manner such that paths in the DAG may be queried efficiently. If we contrast this to naïve automaton implementations, the GCSA has two advantages: it is space efficient, and it allows for efficient queries. Lastly, we demonstrate that challenges posed by the inexact fragment sizes and alphabet size can be overcome, specifically in the context of the GCSA, via careful use of a wavelet tree [17], and by using statistical criteria to control the quality of the discovered alignments. Next, we point out some practical considerations concerning Kohdista. First, we note that Kohdista can be easily parallelized since once the GCSA is constructed from the Rmap data, it can be queried in parallel on as many threads as there are Rmaps to be queried. Next, in this paper, we focus on finding all pairwise alignments that satisfy some statistical constraints, whether they be global or local alignments. Partial alignments can be easily obtained by considering the prefix or suffix of the query Rmap and relaxing the statistical constraint. We verify our approach on simulated E. coli Rmap data by showing that Kohdista achieves similar sensitivity and specificity to the method of Valouev et al. [12], and, with more permissive alignment acceptance criteria, finds 90% of Rmap pairs simulated from overlapping genomic regions. We also show the utility of our approach on larger eukaryote genomes by demonstrating that existing published methods require more than 151 h of CPU time to find all pairwise alignments in the plum Rmap data, whereas Kohdista requires 31 h. Thus, we present the first fully-indexed method capable of finding all match patterns in the pairwise Rmap alignment problem. Preliminaries and definitions Throughout we consider a string (or sequence) \(S = S[1 \ldots n] = S[1]S[2] \ldots S[n]\) of \(|S| = n\) symbols drawn from the alphabet \([1 \ldots \sigma ]\). For \(i=1, \ldots ,n\) we write S[i…n] to denote the suffix of S of length \(n-i+1\), that is \(S[i \ldots n] = S[i]S[i+1] \ldots S[n]\), and S[1…i] to denote the prefix of S of length i. S[i…j] is the substring \(S[i]S[i+1] \ldots S[j]\) of S that starts at position i and ends at j. Given a sequence S[1, n] over an alphabet \(\Sigma = \{1, \ldots ,\sigma \}\), a character \(c \in \Sigma\), and integers i,j, \({\textsf {rank}}_c(S,i)\) is the number of times that c appears in S[1, i], and \({\textsf {select}}_c(S,j)\) is the position of the j-th occurrence of c in S. We remove S from the functions when it is implicit from the context.
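To make the rank and select notation concrete, the following minimal Python sketch (purely illustrative; Kohdista itself is written in C/C++) implements both queries naively over a plain string. The succinct data structures discussed later answer the same queries in compressed space.

def rank(S, c, i):
    """Number of occurrences of symbol c in S[1..i] (1-based, inclusive)."""
    return S[:i].count(c)

def select(S, c, j):
    """Position (1-based) of the j-th occurrence of c in S, or None if there is no such occurrence."""
    count = 0
    for pos, sym in enumerate(S, start=1):
        if sym == c:
            count += 1
            if count == j:
                return pos
    return None

S = "abracadabra"
assert rank(S, "a", 4) == 2      # 'a' occurs twice in "abra"
assert select(S, "a", 3) == 6    # the third 'a' is at position 6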
Overview of optical mapping From a computer science viewpoint, restriction mapping (by optical or other means) can be seen as a process that takes in two sequences: a genome \({\mathsf {A}}[1,n]\) and a restriction enzyme's recognition sequence \({\mathsf {B}}[1,b]\), and produces an array (sequence) of integers \({\textsf {R}}\), the genome restriction map, which we define as follows. First define the array of integers \({\textsf {C}}[1,m]\) where \({\textsf {C}}[i] = j\) if and only if \({\mathsf {A}}[j \ldots j+b] = {\mathsf {B}}\) is the ith occurrence of \({\mathsf {B}}\) in \({\mathsf {A}}\). Then \({\textsf {R}}[i] = ({\textsf {C}}[i]-{\textsf {C}}[i-1])\), with \({\textsf {R}}[1] = {\textsf {C}}[1]-1\). In words, \({\textsf {R}}\) contains the distance between occurrences of \({\mathsf {B}}\) in \({\mathsf {A}}\). For example, if we let \({\mathsf {B}}\) be act and \({\mathsf {A}}= {\texttt {atacttactggactactaaact}}\) then we would have \({\textsf {C}}= 3,7,12,15,20\) and \({\textsf {R}}= 2,4,5,3,5\). In reality, \({\textsf {R}}\) is a consensus sequence formed from millions of erroneous Rmap sequences. The optical mapping system produces millions of Rmaps for a single genome. It is performed on many cells of an organism and for each cell there are thousands of Rmaps (each at least 250 Kbp in length in publicly available data). The Rmaps are then assembled to produce a genome-wide optical map. Like the final \({\textsf {R}}\) sequence, each Rmap is an array of lengths, or fragment sizes, between occurrences of \({\mathsf {B}}\) in \({\mathsf {A}}\). There are three types of errors that an Rmap (and hence with lower magnitude and frequency, also the consensus map) can contain: (1) missing and false cuts, which are caused by an enzyme not cleaving at a specific site, or by random breaks in the DNA molecule, respectively; (2) missing fragments that are caused by desorption, where small (\(< 1\) Kbp) fragments are lost and so not detected by the imaging system; and (3) inaccuracy in the fragment size due to varying fluorescent dye adhesion to the DNA and other limitations of the imaging process. Continuing again with the example above where \({\textsf {R}}= 2,4,5,3,5\) is the error-free Rmap: an example of an Rmap with the first type of error could be \({\textsf {R}}' = 6,5,3,5\) (the first cut site is missing so the fragment sizes 2 and 4 are summed to become 6 in \({\textsf {R}}'\)); an example of an Rmap with the second type of error would be \({\textsf {R}}'' = 2,4,3,5\) (the third fragment is missing); and lastly, the third type of error could be illustrated by \({\textsf {R}}''' = 2,4,7,3,5\) (the size of the third fragment is inaccurately given). Frequency of errors In the optical mapping system, there is a 20% probability that a cut site is missed and a 0.15% probability of a false break per Kbp, i.e., error type (1) occurs in a fragment. Popular restriction enzymes in optical mapping experiments recognize a 6 bp sequence giving an expected cutting density of 1 per 4096 bp. At this cutting density, false breaks are less common than missing restriction sites (approx. \(0.25 \times 0.2 = 0.05\) for missing sites vs. 0.0015 for false sites per Kbp). The error in the fragment size is normally distributed with a mean of 0 bp, and a variance of \(\ell \sigma ^2\), where \(\ell\) is equal to the fragment length and \(\sigma = .58\) kbp [11].
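As an illustration of the definitions and error model above, the short Python sketch below derives \({\textsf {R}}\) from a genome and a recognition sequence, and then perturbs it with missed cuts, false breaks, and sizing noise at the quoted rates. It is a toy for intuition only: desorption is omitted, the helper names are ours, and the simulator used in the results section below differs in its details.

import random

def restriction_map(A, B):
    """Genome restriction map R: distances between occurrences of B in A (1-based cut positions C as in the text)."""
    C = [i + 1 for i in range(len(A) - len(B) + 1) if A[i:i + len(B)] == B]
    return [C[0] - 1] + [C[i] - C[i - 1] for i in range(1, len(C))]

def add_rmap_errors(R, p_miss=0.20, p_false_per_kb=0.0015, sigma_kb=0.58):
    """Toy Rmap error model (fragment sizes in bp): missed cuts merge neighbouring
    fragments, false breaks split fragments, and sizing noise is normal with
    variance proportional to fragment length (variance l * sigma^2, in kb)."""
    merged, acc = [], 0
    for frag in R:
        acc += frag
        if random.random() < p_miss:           # the cut ending this fragment is missed
            continue
        merged.append(acc)
        acc = 0
    if acc:
        merged.append(acc)
    noisy = []
    for frag in merged:
        pieces = [frag]
        if frag > 1 and random.random() < p_false_per_kb * frag / 1000.0:
            cut = random.randint(1, frag - 1)  # false break splits the fragment
            pieces = [cut, frag - cut]
        for p in pieces:
            sd = (sigma_kb ** 2 * p / 1000.0) ** 0.5 * 1000.0   # sd in bp
            noisy.append(max(1, round(p + random.gauss(0, sd))))
    return noisy

print(restriction_map("atacttactggactactaaact", "act"))   # [2, 4, 5, 3, 5], as in the example above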
Suffix arrays, BWT and backward search The suffix array [19] \({\textsf {SA}}_{{\mathsf {X}}}\) (we drop subscripts when they are clear from the context) of a sequence \({\mathsf {X}}\) is an array \({\textsf {SA}}[1 \ldots n]\) which contains a permutation of the integers [1...n] such that \({\mathsf {X}}[{\textsf {SA}}[1] \ldots n]< {\mathsf {X}}[{\textsf {SA}}[2] \ldots n]< \cdots < {\mathsf {X}}[{\textsf {SA}}[n] \ldots n].\) In other words, \({\textsf {SA}}[j] = i\) iff \({\mathsf {X}}[i \ldots n]\) is the \(j{\text{ th }}\) suffix of \({\mathsf {X}}\) in lexicographic order. For a sequence \({\mathsf {Y}}\), the \({\mathsf {Y}}\)-interval in the suffix array \({\textsf {SA}}_{{\mathsf {X}}}\) is the interval \({\textsf {SA}}[s \ldots e]\) that contains all suffixes having \({\mathsf {Y}}\) as a prefix. The \({\mathsf {Y}}\)-interval is a representation of the occurrences of \({\mathsf {Y}}\) in \({\mathsf {X}}\). For a character c and a sequence \({\mathsf {Y}}\), the computation of \(c{\mathsf {Y}}\)-interval from \({\mathsf {Y}}\)-interval is called a left extension. The Burrows–Wheeler Transform \({\textsf {BWT}}[1 \ldots n]\) is a permutation of \({\mathsf {X}}\) such that \({\textsf {BWT}}[i] = {\mathsf {X}}[{\textsf {SA}}[i]-1]\) if \({\textsf {SA}}[i]>1\) and $ otherwise [20]. We also define \({\textsf {LF}}[i] = j\) iff \({\textsf {SA}}[j] = {\textsf {SA}}[i]-1\), except when \({\textsf {SA}}[i] = 1\), in which case \({\textsf {LF}}[i] = I\), where \({\textsf {SA}}[I] = n\). Ferragina and Manzini [21] linked \({\textsf {BWT}}\) and \({\textsf {SA}}\) in the following way. Let \({\textsf {C}}[c]\), for symbol c, be the number of symbols in \({\mathsf {X}}\) lexicographically smaller than c. The function \({\textsf {rank}}({\mathsf {X}},c,i)\), for sequence \({\mathsf {X}}\), symbol c, and integer i, returns the number of occurrences of c in \({\mathsf {X}}[1 \ldots i]\). It is well known that \({\textsf {LF}}[i] = {\textsf {C}}[{\textsf {BWT}}[i]] + {\textsf {rank}}({\textsf {BWT}},{\textsf {BWT}}[i],i)\). Furthermore, we can compute the left extension using \({\textsf {C}}\) and \({\textsf {rank}}\). If \({\textsf {SA}}[s \ldots e]\) is the \({\mathsf {Y}}\)-interval, then \({\textsf {SA}}[{\textsf {C}}[c]+{\textsf {rank}}({\textsf {BWT}},c,s),{\textsf {C}}[c]+{\textsf {rank}}({\textsf {BWT}},c,e)]\) is the \(c{\mathsf {Y}}\)-interval. This is called backward search, and a data structure supporting it is called an FM-index [21]. To support rank queries in backward search, a data structure called a wavelet tree can be used [17]. It occupies \(n\log \sigma + o(n\log \sigma )\) bits of space and supports \({\textsf {rank}}\) queries in \(O(\log \sigma )\) time. Wavelet trees also support a variety of more complex queries on the underlying string efficiently. We refer the reader to Gagie et al. [17] for a more thorough discussion of wavelet trees. One such query we will use in this paper is to return the set X of distinct symbols occurring in S[i, j], which takes \(O(|X|\log \sigma )\) time. The pairwise Rmap alignment problem The pairwise Rmap alignment problem aims to align one Rmap (the query) \({\textsf {R}}_q\) against the set of all other Rmaps in the dataset (the target). We denote the target database as \({\textsf {R}}_1 \ldots {\textsf {R}}_n\), where each \({\textsf {R}}_i\) is a sequence of \(m_i\) fragment sizes, i.e, \({\textsf {R}}_i = [f_{i1}, \ldots , f_{im_i}]\). 
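Before turning to the alignment itself, the following self-contained Python sketch recaps the classical FM-index backward search just described, with a naively built suffix array and BWT and a naive rank; it is illustrative only, and real FM-indexes replace every piece of it with succinct structures.

def backward_search(X, Q, sentinel="$"):
    """Return the suffix array interval [s, e) of suffixes of X + sentinel that are
    prefixed by Q, using exact backward search. Assumes the sentinel is smaller
    than every symbol of X and does not occur in X."""
    X = X + sentinel
    SA = sorted(range(len(X)), key=lambda i: X[i:])                 # naive suffix array
    B = "".join(X[i - 1] if i > 0 else sentinel for i in SA)        # Burrows-Wheeler transform
    C = {c: sum(1 for d in X if d < c) for c in set(X)}             # symbols smaller than c
    s, e = 0, len(X)
    for c in reversed(Q):                                           # extend the match leftward
        if c not in C:
            return 0, 0
        s = C[c] + B[:s].count(c)                                   # rank(BWT, c, s)
        e = C[c] + B[:e].count(c)                                   # rank(BWT, c, e)
        if s >= e:
            return 0, 0
    return s, e

s, e = backward_search("atacttactggactactaaact", "act")
print(e - s)   # 5: "act" occurs five times, matching the example map earlier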
An alignment between two Rmaps is a relation between them comprising groups of zero or more consecutive fragment sizes in one Rmap associated with groups of zero or more consecutive fragments in the other. For example, given \({\textsf {R}}_i = [4, 5, 10, 9, 3]\) and \({\textsf {R}}_j = [10, 9, 11]\) one possible alignment is \(\{[4,5], [10]\}, \{ [10], [9]\}, \{[9], [11]\}, \{[3], []\}\). A group may contain more than one fragment (e.g. [4, 5]) when the restriction site delimiting the fragments is absent in the corresponding group of the other Rmap (e.g. [10]). This can occur if there is a false restriction site in one Rmap, or there is a missing restriction site in the other. Since we cannot tell from only two Rmaps which of these scenarios occurred, for the purpose of our remaining discussion it will be sufficient to consider only the scenario of missed (undigested) restriction sites. We now describe the algorithm behind Kohdista. Three main insights enable our index-based aligner for Rmap data: (1) abstraction of the alignment problem to a finite automaton; (2) use of the GCSA for storing and querying the automaton; and (3) modification of backward search to use a wavelet tree in specific ways to account for the Rmap error profile. Finite automaton Continuing with the example in the background section, we want to align \({\textsf {R}}' = 6,5,3,5\) to \({\textsf {R}}''' = 2,4,7,3,5\) and vice versa. To accomplish this we cast the Rmap alignment problem to that of matching paths in a finite automaton. A finite automaton is a directed, labeled graph that defines a language, or a specific set of sequences composed of vertex labels. A sequence is recognized by an automaton if it contains a matching path: a consecutive sequence of vertex labels equal to the sequence. We represent the target Rmaps as an automaton and the query as a path in this context. The automaton for our target Rmaps can be constructed as follows. First, we concatenate the \({\textsf {R}}_1 \ldots {\textsf {R}}_n\) together into a single sequence with each Rmap separated by a special symbol which will not match any query symbol. Let \({\textsf {R}}^*\) denote this concatenated sequence. Hence, \({\textsf {R}}^* = [f_{11}, \ldots ,f_{1m_1}, \ldots , f_{n1}, \ldots ,f_{nm_n}]\). Then, construct an initial finite automaton \({\mathsf {A}}= (V, E)\) for \({\textsf {R}}^*\) by creating a set of vertices \(v^i_1 \ldots v^i_m\), one vertex per fragment for a total of \(|{\textsf {R}}^*|\) vertices, and each vertex is labeled with the length of its corresponding fragment. Edges are then added connecting vertices representing consecutive pairs of elements in \({\textsf {R}}^*\). Also, introduce to \({\mathsf {A}}\) a starting vertex \(v_1\) labeled with # and a final vertex \(v_f\) labeled with the character $. All other vertices in \({\mathsf {A}}\) are labeled with integral values. This initial set of vertices and edges is called the backbone. The backbone by itself is only sufficient for finding alignments with no missing cut sites in the query. The backbone of an automaton constructed for a set containing \({\textsf {R}}'\) and \({\textsf {R}}''\) would be #, 6, 5, 3, 5, 999, 2, 4, 3, 5, $, using 999 as an unmatchable value. Next, extra vertices ("skip vertices") and extra edges are added to \({\mathsf {A}}\) to allow the automaton to accept all valid queries. Figure 1a illustrates the construction of \({\mathsf {A}}\) for a single Rmap with fragment sizes 2, 3, 4, 5, 6.
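The backbone construction just described is straightforward to express in code. The sketch below (a hypothetical helper, not Kohdista's implementation) builds the labeled vertex list and consecutive edges for a set of Rmaps, reproducing the example backbone above; the skip vertices and skip edges introduced in the next subsection are then layered on top of this backbone.

def build_backbone(rmaps, separator=999):
    """Backbone automaton: one labeled vertex per fragment, edges between consecutive
    fragments, a '#' start vertex, a '$' end vertex, and an unmatchable separator
    label between Rmaps."""
    labels = ["#"]
    for k, rmap in enumerate(rmaps):
        if k > 0:
            labels.append(separator)
        labels.extend(rmap)
    labels.append("$")
    edges = [(i, i + 1) for i in range(len(labels) - 1)]
    return labels, edges

labels, edges = build_backbone([[6, 5, 3, 5], [2, 4, 3, 5]])
print(labels)   # ['#', 6, 5, 3, 5, 999, 2, 4, 3, 5, '$']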
Skip vertices and skip edges We introduce extra vertices labeled with compound fragments to allow missing cut sites (first type of error) to be taken into account in querying the target Rmaps. We refer to these as skip vertices as they provide alternative path segments which skip past two or more backbone vertices. Thus, we add a skip vertex to \({\mathsf {A}}\) for every pair of consecutive vertices in the backbone, as well as every triple of consecutive vertices in the backbone, and label these vertices with the sum of the labels of the corresponding backbone vertices. For example, the vertex labeled with 7 connecting 2 and 5 in Fig. 1a is an example of a skip vertex. Likewise, 5, 9, 11 are other skip vertices. Skip vertices corresponding to a pair of vertices in the backbone correspond to a single missing cut-site; similarly, skip vertices corresponding to a triple of vertices in the backbone correspond to two consecutive missing cut-sites. The probability of more than two consecutive missing cut-sites is negligible [11], and thus, we do not consider more than pairs or triples of vertices. Finally, we add skip edges which provide paths around vertices with small labels in the backbone. These allow for desorption (the second type of error) to be taken into account in querying \({\textsf {R}}^*\).
Fig. 1 Example automata and corresponding memory representation
Generalized compressed suffix array We index the automaton with the GCSA for efficient storage and path querying. The GCSA is a generalization of the FM-index for automata. We will explain the GCSA by drawing on the definition of the more widely-known FM-index. As stated in the background section, the FM-index is based on the deep relationship between the \({\textsf {SA}}\) and the \({\textsf {BWT}}\) data structures of the input string \({\mathsf {X}}\). The \({\textsf {BWT}}\) of an input string is formed by sorting all characters of the string by the lexicographic order of the suffix immediately following each character. The main properties the FM-index exploits in order to perform queries efficiently are (a) \({\textsf {BWT}}[i] = {\mathsf {X}}[{\textsf {SA}}[i]-1]\); and (b) given that \({\textsf {SA}}[i] = j\), and \({\textsf {C}}[c]\) gives the position of the first suffix in \({\textsf {SA}}\) prefixed with character c, then using small auxiliary data structures we can quickly determine \(k = {\textsf {C}}[{\textsf {BWT}}[i]] + {\textsf {rank}}({\textsf {BWT}},{\textsf {BWT}}[i],i)\), such that \({\textsf {SA}}[k] = j-1\). The first of these properties is simply the definition of the \({\textsf {BWT}}\). The second holds because the symbols of \({\mathsf {X}}\) occur in the same order in both the single-character prefixes in the suffix array and in the \({\textsf {BWT}}\): given a set of sorted suffixes, prepending the same character onto each suffix does not change their order. Thus, if we consider all the suffixes in a range of \({\textsf {SA}}\) which are preceded by the same symbol c, that subset will appear in the same relative order in \({\textsf {SA}}\): as a contiguous subinterval of the interval that contains all the suffixes beginning with c. Hence, by knowing the position of the interval in \({\textsf {SA}}\) corresponding to a symbol, and the \({\textsf {rank}}\) of an instance of that symbol, we can identify the \({\textsf {SA}}\) position beginning with that instance from its position in \({\textsf {BWT}}\).
A rank data structure over the \({\textsf {BWT}}\) constitutes a sufficient compressed index of the suffix array needed for traversal. To generalize the FM-index to automata from strings, we need to efficiently store the vertices and edges in a manner such that the FM-index properties still hold, allowing the GCSA to support queries efficiently. An FM-index's compressed suffix array for a string S encodes a relationship between each suffix of S and its left extension. Hence, this suffix array can be generalized to edges in a graph that represent a relationship between vertices. The compressed suffix array for a string is a special case where the vertices are labeled with the string's symbols in a non-branching path. Prefix-sorted automata Just as backward search for strings is linked to suffix sorting, backward searching in the BWT of the automaton requires us to be able to sort the vertices (and a set of the paths) of the automaton in a particular way. This property is called prefix-sorted by Sirén et al. [18]. Let \(A = (V,E)\) be a finite automaton, let \(v_{|V|}\) denote its terminal vertex, and let \(v \in V\) be a vertex. We say v is prefix-sorted by prefix p(v) if the labels of all paths from v to \(v_{|V|}\) share a common prefix p(v), and no path from any other vertex \(u \ne v\) to \(v_{|V|}\) has p(v) as a prefix of its label. Automaton A is prefix-sorted if all vertices are prefix-sorted. See Fig. 1a for an example of a non-prefix sorted automaton and a prefix sorted automaton. A non-prefix sorted automaton can be made prefix sorted through a process of duplicating vertices and their incoming edges but dividing their outgoing edges between the new instances. We refer the reader to Sirén et al. [18] for a more thorough explanation of how to transform a non-prefix sorted automaton to a prefix-sorted one. Clearly, the prefixes p(v) allow us to sort the vertices of a prefix-sorted automaton into lexicographical order. Moreover, if we consider the list of outgoing edges (u, v), sorted by pairs (p(u), p(v)), they are also sorted by the sequences \(\ell (u)p(v)\), where \(\ell (u)\) denotes the label of vertex u. This dual sortedness property allows backward searching to work over the list of vertex labels (sorted by p(v)) in the same way that it does for the symbols of a string ordered by their following suffixes in normal backward search for strings. Each vertex has a set of one or more preceding vertices and therefore, a set of predecessor labels in the automaton. These predecessor label sets are concatenated to form the \({\textsf {BWT}}\). The sets are concatenated in the order defined by the above-mentioned lexicographic ordering of the vertices. Each element in \({\textsf {BWT}}\) then denotes an edge in the automaton. Another bit vector, \({\textsf {F}}\), marks a '1' for the first element of \({\textsf {BWT}}\) corresponding to a vertex and a '0' for all subsequent elements in that set. Thus, the predecessor labels, and hence the associated edges, for a vertex with rank r are \({\textsf {BWT}}[{\textsf {select}}_1({\textsf {F}},r) \ldots {\textsf {select}}_1({\textsf {F}},r+1)]\). Another array, \({\textsf {M}}\), stores the outdegree of each vertex and allows the set of vertex ranks associated with a \({\textsf {BWT}}\) interval to be found using \({\textsf {rank}}()\) queries. Exact matching: GCSA backward search Exact matching with the GCSA is similar to the standard FM-index backward search algorithm.
As outlined in the background section, FM-index backward search proceeds by finding a succession of lexicographic ranges that match progressively longer suffixes of the query string, starting from the rightmost symbol of the query. The search maintains two items (a lexicographic range and an index into the query string) and the property that the path prefix associated with the lexicographic range is equal to the suffix of the query marked by the query index. Initially, the query index is at the rightmost symbol and the range is [1…n] since every path prefix matches the empty suffix. The search continues using GCSA's backward search step function, which takes as parameters the next symbol (to the left) in the query (i.e. fragment size in \({\textsf {R}}_q\)) and the current range, and returns a new range. The query index is advanced leftward after each backward search step. In theory, since the current range corresponds to a consecutive range in the \({\textsf {BWT}}\), the backward search could use \({\textsf {select}}()\) queries on the bit vector \({\textsf {F}}\) (see above) to determine all the edges adjacent to a given vertex and then two FM-index \({\textsf {LF}}()\) queries are applied to the limits of the current range to obtain the new one. GCSA's implementation uses one succinct bit vector per alphabet symbol to encode which symbols precede a given vertex instead of \({\textsf {F}}\). Finally, this new range, which corresponds to a set of edges, is mapped back to a set of vertices using \({\textsf {rank}}()\) on the \({\textsf {M}}\) bit vector. Inexact matching: modified GCSA backward search We modified GCSA backward search in the following ways. First, we modified the search process to combine consecutive fragments into compound fragments in the query Rmap in order to account for erroneous cut-sites. Second, we added and used a wavelet tree in order to allow efficient retrieval of substitution candidates to account for sizing error. Lastly, we introduced backtracking to allow aligning Rmaps in the presence of multiple alternative size substitution candidates as well as alternative compound fragments for each point in the query. We now discuss these modifications in further detail below. To accommodate possible false restriction sites that are present in the query Rmap, we generate compound fragments by summing pairs and triples of consecutive query fragment sizes. This summing of multiple consecutive query fragments is complementary to the skip vertices in the target automaton which accommodate false restriction sites in the target. We note for each query Rmap there will be multiple combinations of compound fragments generated. Next, in order to accommodate possible sizing error in the Rmap data, we modified the backward search by adding and using a wavelet tree in our query of the GCSA. The original implementation of the GCSA does not construct or use the wavelet tree. Although it does consider alignments containing mismatches, it is limited to small alphabets (e.g., the DNA alphabet), which do not necessitate the use of the wavelet tree. Here, the alphabet consists of all possible fragment sizes. Thus, we construct the wavelet tree in addition to the GCSA. Then, when aligning a fragment f in the query Rmap, we determine the set of candidate fragment sizes that are within some error tolerance of f by enumerating the distinct symbols in the currently active backward search range of the \({\textsf {BWT}}\) using the wavelet tree algorithm of Gagie et al. [17].
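To make these two modifications concrete, the sketch below generates the compound fragments considered at a query position and enumerates the size-compatible candidate symbols in a BWT interval, which is the role played by the wavelet tree range query. The naive scan stands in for the \(O(|X|\log \sigma )\) wavelet tree algorithm, and the function names and tolerance rule are illustrative simplifications rather than Kohdista's actual interface.

def compound_fragments(query, i, max_merge=3):
    """Sums of 1, 2, or 3 consecutive query fragments ending at position i (0-based),
    mirroring the single, pair, and triple compound fragments described above."""
    sums, total = [], 0
    for k in range(max_merge):
        if i - k < 0:
            break
        total += query[i - k]
        sums.append((total, k + 1))        # (compound fragment size, fragments consumed)
    return sums

def candidate_symbols(bwt, s, e, f, tol):
    """Distinct fragment-size symbols in bwt[s:e] within [f - tol, f + tol].
    A wavelet tree answers this without scanning; here we scan naively."""
    return sorted({sym for sym in bwt[s:e] if f - tol <= sym <= f + tol})

# Compound fragments ending at the third fragment (7) of the query R''' = [2, 4, 7, 3, 5]
print(compound_fragments([2, 4, 7, 3, 5], 2))       # [(7, 1), (11, 2), (13, 3)]
print(candidate_symbols([5, 7, 6, 12, 6], 0, 5, 6, 1))   # [5, 6, 7] for a toy BWT slice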
As previously mentioned, this use of the wavelet tree also exists in Twin [14] but is constructed and used in conjunction with an FM-index. We used the SDSL-Lite library by Gog et al. [22] to construct and store the GCSA. Finally, since there may be multiple alternative size compatible candidates in the \({\textsf {BWT}}\) interval of \({\textsf {R}}^*\) for a compound fragment and multiple alternative compound fragments generated at a given position in the query Rmap, we add backtracking to backward search so each candidate alignment is evaluated. We note that this is akin to the use of backtracking algorithms in short read alignment [15, 16]. Thus, for a given compound fragment size f generated from \({\textsf {R}}_q\), every possible candidate fragment size, \(f'\), that can be found in \({\textsf {R}}^*\) in the range \(f - t \ldots f + t\) and in the interval \(s \ldots e\) (of the \({\textsf {BWT}}\) of \({\textsf {R}}^*\)) for some tolerance t is considered as a possible substitute in the backward search. Thus, to recap, when attempting to align each query Rmap, we consider every possible combination of compound fragments and use the wavelet tree to determine possible candidate matches during the backward search. There are potentially a large number of possible candidate alignments; for efficiency, these candidates are pruned by evaluating the alignment during each step of the search relative to statistical models of the expected error in the data. We discuss this pruning in the next subsection. Pruning the search Alignments are found by incrementally extending candidate partial alignments (paths in the automaton) to longer partial alignments by choosing one of several compatible extension matches (adjacent vertices to the end of a path in the automaton). To perform this search efficiently, we prune the search by computing the Chi-squared CDF and binomial CDF statistics of the partial matches and use thresholds to ensure reasonable size agreement of the matched compound fragments, and the frequency of putative missing cut sites. We conclude this section by giving an example of the backward search. Size agreement We use the Chi-squared CDF statistic to assess size agreement. This assumes the fragment size errors are independent, normally distributed events. For each pair of matched compound fragments in a partial alignment, we take the mean between the two as the assumed true length and compute the expected standard deviation using this mean. Each compound fragment deviates from the assumed true value by half the distance between them. These two values contribute two degrees of freedom to the Chi-squared calculation. Thus, each deviation is normalized by dividing by the expected standard deviation; these normalized deviations are squared and summed across all compound fragments to generate the Chi-squared statistic. We use the standard Chi-squared CDF function to compute the area under the curve of the probability density function up to this Chi-squared statistic, which gives the probability that two Rmap segments from common genomic origin would have a Chi-squared statistic no more extreme than observed. This probability is compared to Kohdista's chi-squared-cdf-thresh and if smaller, the candidate compound fragment is assumed to be a reasonable match and the search continues.
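A minimal sketch of the size-agreement test follows, assuming the expected standard deviation is taken as \(\sigma \sqrt{\ell }\) with \(\ell\) the assumed true length in kb and \(\sigma = 0.58\) kb as quoted earlier; this is our reading of the description above rather than Kohdista's exact bookkeeping. The cut site error test described next can be sketched analogously with scipy.stats.binom.cdf.

from scipy.stats import chi2

def size_agreement_cdf(pairs, sigma_kb=0.58):
    """Chi-squared CDF statistic for matched compound-fragment pairs (sizes in kb).
    Each pair contributes two degrees of freedom: both fragments are assumed to
    deviate from their mean by half the distance between them."""
    stat = 0.0
    for a, b in pairs:
        mean = (a + b) / 2.0
        sd = sigma_kb * mean ** 0.5        # expected sd from variance l * sigma^2
        dev = abs(a - b) / 2.0
        stat += 2.0 * (dev / sd) ** 2      # two normalized squared deviations per pair
    return chi2.cdf(stat, df=2 * len(pairs))

# Partial alignment matching compound fragments (9 kb vs 10 kb) and (9 kb vs 11 kb)
p = size_agreement_cdf([(9.0, 10.0), (9.0, 11.0)])
print(p)   # extend the candidate only if p < chi-squared-cdf-thresh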
Cut site error frequency We use the Binomial CDF statistic to assess the probability of the number of cut site errors in a partial alignment. This assumes missing cut site errors are independent Bernoulli events. We account for all the putatively conserved cut sites on the boundaries and those delimiting compound fragments in both partially aligned Rmaps, plus twice the number of missed sites, as the number of Bernoulli trials. We use the standard binomial CDF function to compute the sum of the probability mass function up to the number of non-conserved cut sites in a candidate match. Like the size agreement calculation above, this gives the probability that two Rmaps of common genomic origin would have the number of non-conserved sites seen or fewer in the candidate partial alignment under consideration. This is compared to the binom-cdf-thresh to decide whether to consider extensions to the given candidate partial alignment. Thus, given a set of Rmaps and input parameters binom-cdf-thresh and chi-squared-cdf-thresh, we produce the set of all Rmap alignments that have a Chi-squared CDF statistic less than chi-squared-cdf-thresh and a binomial CDF statistic less than binom-cdf-thresh. Both of these are subject to the additional constraint of a maximum consecutive missed restriction site run between aligned sites of two and a minimum aligned site set cardinality of 16.
Example traversal A partial search for a query Rmap [3 kb, 7 kb, 6 kb] in Fig. 1a and the memory representation in Fig. 1b, given an error model with a constant 1 kb sizing error, would proceed with the following steps:
1. Start with the semi-open interval matching the empty string, [0…12).
2. A wavelet tree query on \({\textsf {BWT}}\) would indicate the set of symbols {5, 6, 7} is the intersection of two sets: (a) the set of symbols that would all be valid left extensions of the (currently empty) match string and (b) the set of size appropriate symbols that match our next query symbol (i.e. 6 kb, working from the right end of our query) in light of the expected sizing error (i.e. 6 kb +/− 1 kb).
3. We would then do a GCSA backward search step on the first value in the set (5), which would yield the new interval [4…7). This new interval denotes only nodes where each node's common prefix is compatible with the spelling of our current backward traversal path through the automaton (i.e. our short path of just [5] does not contradict any path spellable from any of the three nodes denoted in the match interval).
4. A wavelet tree query on the \({\textsf {BWT}}\) for this interval for values 7 kb +/− 1 kb would return the set of symbols {7}.
5. Another backward search step would yield the new interval [8…9). At this point our traversal path would be [7, 5] (denoted as a left extension of a forward path that we are building by traversing the graph backward). The common prefix of each node (only one node here) in our match interval (i.e. [7 kb]) is compatible with the path [7, 5].
6. This process would continue until backward search returns no match interval or our scoring model indicates our repeatedly left-extended path has grown too divergent from our query. At this point backtracking would occur to find other matches (e.g. at some point we would backward search using the value 6 kb instead of the 5 kb obtained in step 2).
Table 1 Performance on simulated E. coli dataset
Practical considerations In this section we describe some of the practical considerations that were made in the implementation. Memoization One side effect of summing consecutive fragments in both the search algorithm and the target data structure is that several successive search steps with agreeing fragment sizes will also have agreeing sums of those successive fragments.
In this scenario, proceeding deeper in the search space will result in wasted effort. To reduce this risk, we maintain a table of scores obtained when reaching a particular lexicographic range and query cursor pair. We only proceed with the search past this point when either the point has never been reached before, or has only been reached before with inferior scores. Wavelet tree threshold The wavelet tree allows efficiently finding the set of vertex labels that are predecessors of the vertices in the current match interval intersected with the set of vertex labels that would be compatible with the next compound fragment to be matched in the query. However, when the match interval is sufficiently small (\(< 750\)) it is faster to scan the vertices in \({\textsf {BWT}}\) directly. The alphabet of fragment sizes can be large considering all the measured fragments from multiple copies of the genome. This can cause an extremely large branching factor for the initial symbol and first few extensions in the search. To improve the efficiency of the search, the fragment sizes are initially quantized, thus reducing the size of the effective alphabet and the number of substitution candidates under consideration at each point in the search. Quantization also increases the number of identical path segments across the indexed graph which allows a greater amount of candidate matches to be evaluated in parallel because they all fall into the same \({\textsf {BWT}}\) interval during the search. This does, however, introduce some quantization error into the fragment sizes, but the bin size is chosen to keep this small in comparison to the sizing error. We evaluated Kohdista against the other available optical map alignment software. Our experiments measured runtime, peak memory, and alignment quality on simulated E. coli Rmaps and experimentally generated plum Rmaps. All experiments were performed on Intel Xeon computers with \(\ge\) 16 GB RAM running 64-bit Linux. Parameters used We tried OPTIMA with both "p-value" and "score" scoring and the allMaps option and report the higher sensitivity "score" setting. We followed the OPTIMA-Overlap protocol of splitting Rmaps into k-mers, each containing 12 fragments as suggested in [8]. For OMBlast, we adjusted parameters maxclusteritem, match, fpp, fnp, meas, minclusterscore, and minconf. For MalignerDP, we adjusted parameters max-misses, miss-penalty, sd-rate, min-sd, and max-miss-rate and additionally filtered the results by alignment score. Though unpublished, for comparison we also include the proprietary RefAligner software from BioNano. For RefAligner we adjusted parameters FP, FN, sd, sf, A, and S. For Kohdista, we adjusted parameters chi-squared-cdf-thresh and binom-cdf-thresh. For the method of Valouev et al. [12], we adjusted score_thresh and t_score_thresh variables in the source. In Table 1 we report statistical and computational performance for each method. OMBlast was configured with parameters meas = 3000, minconf = 0.09, minmatch = 15 and the rest left at defaults. RefAligner was run with parameters FP = 0.15, sd = 0.6, sf = 0.2, sr = 0.0, se = 0.0, A = 15, S = 22 and the rest left at defaults. MalignerDP was configured with parameters ref-max-misses = 2, query-miss-penalty = 3, query-max-miss-rate = 0.5, min-sd = 1500, and the rest left at defaults. The method of Valouev et al. [12] was run with default parameters except we reduced the maximum compound fragment length (their \(\delta\) parameter) from 6 fragments to 3. 
We observed this method rarely included alignments containing more than two missed restriction sites in a compound fragment. Performance on simulated E. coli Rmap data To verify the correctness of our method, we simulated a read set from a 4.6 Mbp E. coli reference genome as follows: we started with 1,400 copies of the genome, and then generated 40 random loci within each. These loci form the ends of molecules that would undergo digestion. Molecules smaller than 250 Kbp were discarded, leaving 272 Rmaps with a combined length equating to 35x coverage depth. The cleavage sites for the XhoI enzyme were then identified within each of these simulated molecules. We removed 20% of these at random from each simulated molecule to model partial digestion. Finally, normally distributed noise was added to each fragment with a standard deviation of .58 kb per 1 kb of the fragment. This simulation resulted in 272 Rmaps. Simulated molecule pairs having 16 common conserved digestion sites become the set of "ground truth" alignments, which our method and other methods should successfully identify. Our simulation resulted in 4,305 ground truth alignments matching this criterion. Although a molecule would align to itself, these are not included in the ground truth set. This method of simulation was based on the E. coli statistics given by Valouev et al. [12] and resulted in a molecule length distribution as observed in publicly available Rmap data from OpGen, Inc. Most methods were designed for less noisy data but in theory could address all the data error types required. For methods with tunable parameters, we tried aligning the E. coli Rmaps with combinations of parameters for each method related to its alignment score thresholds and error model parameters. We used parameterizations giving results similar to those for the default parameters of the method of Valouev et al. [12] to the extent such parameters did not significantly increase the running time of each method. These same parameterizations were used in the next section on plum data. Even with tuning, we were unable to obtain pairwise alignments on E. coli for two methods. We found OPTIMA only produced self alignments with its recommended overlap protocol and report its resource use in Table 1. For MalignerIX, even when we relaxed the parameters to account for the greater sizing error and mismatch cut site frequency, it was also only able to find self alignments. This is expected as by design it only allows missing sites in one sequence in order to run faster. Thus no further testing was performed with MalignerIX or OPTIMA. We did not test SOMA [13] as an earlier investigation indicates it would not scale to larger genomes [14]. We omitted TWIN [14] as it needs all cut sites to match. With tuning, Kohdista, Maligner, the method of Valouev et al. [12], RefAligner and OMBlast produced reasonable results on the E. coli data. Results for the best combinations of parameters encountered during tuning can be seen in Figs. 2 and 3. We observed that most methods could find more ground truth alignments as their parameters were relaxed at the expense of additional false positives, as illustrated in these figures. However, only the method of Valouev et al. and Kohdista approached recall of all ground truth alignments.
Fig. 2 Precision-recall plot of successful methods on simulated E. coli
Fig. 3 ROC plot of successful methods on simulated E. coli
Table 1 illustrates the results for Kohdista and the competing methods with parameters optimized to try to match those of Valouev et al.
[12], as well as results using Kohdista with a more permissive parameter setting. All results include both indexing as well as search time. Kohdista took two seconds to build its data structures. Again, Kohdista uses Chi-squared and binomial CDF thresholds to prune the backtracking search when deciding whether to extend alignments to progressively longer alignments. More permissive match criteria, using higher thresholds, allows more Rmaps to be reached in the search and thus to be considered aligned, but it also results in less aggressive pruning in the search, thus lengthening runtime. As an example, note that when Kohdista was configured with a much relaxed Chi-squared CDF threshold of .5 and a binomial CDF threshold of .7, it found 3925 of the 4305 (91%) ground truth alignments, but slowed down considerably. This illustrates the index and algorithm's capability in handling all error types and achieving high recall. Performance on plum Rmap data The Beijing Forestry University and other institutes assembled the first plum (Prunus mume) genome using short reads and optical mapping data from OpGen Inc. We test the various available alignment methods on the 139,281 plum Rmaps from June 2011 available in the GigaScience repository. These Rmaps were created with the BamHI enzyme and have a coverage depth of 135x of the 280 Mbp genome. For the plum dataset, we ran all the methods which approach the statistical performance of the method of Valouev et al. [12] when measured on E. coli. Thus, we omitted MalignerIX and OPTIMA because they had 0% recall and precision on E. coli. Our results on this plum dataset are summarized in Table 2. Table 2 Performance on plum Kohdista was the fastest and obtained more alignments than the competing methods. When configured with the Chi-squared CDF threshold of .02 and binomial CDF threshold of .5, it took 31 h of CPU time to test all Rmaps for pairwise alignments in the plum Rmap data. This represents a 21× speed-up over the 678 h taken by the dynamic programming method of Valouev et al. [12]. MalignerDP and OMBlast took 214 h and 151 h, respectively. Hence, Kohdista has a 6.9× and 4.8× speed-up over MalignerDP and OMBlast. All methods used less than 13 GB of RAM and thus, were considered practical from a memory perspective. Kohdista took 11 min to build its data structures for Plum. To measure the quality of the alignments, we scored each pairwise alignment using Valouev et al. [12] and presented histograms of these alignment scores in Fig. 4. For comparison, we also scored and present the histogram for random pairs of Rmaps. The method of Valouev et al. [12] produces very few but high-scoring alignments and although it could theoretically be altered to produce a larger number of alignments, the running time makes this prospect impractical (678 h). Although Kohdista and RefAligner produce high-quality alignments, RefAligner produced very few alignments (10,039) and required almost 5x more time to do so. OMBlast and Maligner required significantly more time and produced significantly lower quality alignments. A comparison between the quality of the scores of the alignments found by the various methods on the plum data. All alignments were realigned using the dynamic programming method of Valouev et al. [12] in order to acquire a score for each alignment. Hence, the method finds the optimal alignment using a function balancing size agreement and cut site agreement known as a S-score. 
In this paper, we demonstrate how finding pairwise alignments in Rmap data can be modelled as approximate-path matching in a directed acyclic graph, and combining the GCSA with the wavelet tree results in an index-based data structure for solving this problem. We implement this method and present results comparing Kohdista with competing methods. By demonstrating results on both simulated E. coli Rmap data and real plum Rmaps, we show that Kohdista is capable of detecting high scoring alignments in efficient time. In particular, Kohdista detected the largest number of alignments in 31 h. RefAligner, a proprietary method, produced very few high scoring alignments (10,039) and required almost 5× more time to do so. OMBlast and Maligner required significantly more time and produced significantly lower quality alignments. The method of Valouev et al. [12] produced high scoring alignments but required more than 21× the time to do so. Project name: Kohdista. Project home page: https://github.com/mmuggli/KOHDISTA. Operating system(s): Developed for 32-bit and 64-bit Linux/Unix environments. Programming language: C/C++. Other requirements: GCC 4.2 or newer. License: MIT license. Any restrictions to use by non-academics: None. Kohdista is available at https://github.com/mmuggli/KOHDISTA/. No original data were acquired for this research. The simulated E. coli data generated and analysed during the current study are available at https://github.com/mmuggli/KOHDISTA. The plum (Prunus mume) dataset used in this research was acquired from the GigaScience repository http://gigadb.org/dataset/view/id/100084/File_sort/size. Abbreviations: DAG: directed acyclic graph; SA: suffix array; GCSA: generalized compressed suffix array; BWT: Burrows–Wheeler transform. Dimalanta ET, Lim A, Runnheim R, Lamers C, Churas C, Forrest DK, de Pablo JJ, Graham MD, Coppersmith SN, Goldstein S, Schwartz DC. A microfluidic system for large DNA molecule arrays. Anal Chem. 2004;76(18):5293–301. Bionano Genomics Inc. Bionano Genomics Launches Irys, a novel platform for complex human genome analysis; 2012. https://bionanogenomics.com/press-releases/bionano-genomics-launches-irys-a-novel-platform-for-complex-human-genome-analysis/. Reslewic S, et al. Whole-genome shotgun optical mapping of Rhodospirillum rubrum. Appl Environ Microbiol. 2005;71(9):5511–22. Zhou S, et al. A whole-genome shotgun optical map of Yersinia pestis strain KIM. Appl Environ Microbiol. 2002;68(12):6321–31. Zhou S, et al. Shotgun optical mapping of the entire Leishmania major Friedlin genome. Mol Biochem Parasitol. 2004;138(1):97–106. Chamala S, et al. Assembly and validation of the genome of the nonmodel basal angiosperm Amborella. Science. 2013;342(6165):1516–7. Dong Y, et al. Sequencing and automated whole-genome optical mapping of the genome of a domestic goat (Capra hircus). Nat Biotechnol. 2013;31(2):136–41. Verzotto D, et al. OPTIMA: sensitive and accurate whole-genome alignment of error-prone genomic maps by combinatorial indexing and technology-agnostic statistical analysis. GigaScience. 2016;5(1):2. Leung AK, Kwok T-P, Wan R, Xiao M, Kwok P-Y, Yip KY, Chan T-F. OMBlast: alignment tool for optical mapping using a seed-and-extend approach. Bioinformatics. 2017;33(3):311–9.
Mendelowitz LM, Schwartz DC, Pop M. Maligner: a fast ordered restriction map aligner. Bioinformatics. 2016;32(7):1016–22. Valouev A, Li L, Liu Y-C, Schwartz DC, Yang Y, Zhang Y, Waterman MS. Alignment of optical maps. J Comput Biol. 2006;13(2):442–62. Valouev A, et al. An algorithm for assembly of ordered restriction maps from single DNA molecules. Proc Natl Acad Sci. 2006;103(43):15770–5. Nagarajan N, Read TD, Pop M. Scaffolding and validation of bacterial genome assemblies using optical restriction maps. Bioinformatics. 2008;24(10):1229–35. Muggli MD, Puglisi SJ, Boucher C. Efficient indexed alignment of contigs to optical maps. In: Proceedings of the 14th workshop on algorithms in bioinformatics (WABI). Berlin: Springer; 2014. p. 68–81. Li H, Durbin R. Fast and accurate short read alignment with Burrows–Wheeler transform. Bioinformatics. 2009;25(14):1754–60. Langmead B, Trapnell C, Pop M, Salzberg SL. Ultrafast and memory-efficient alignment of short DNA sequences to the human genome. Genome Biol. 2009;10(3):25. Gagie T, Navarro G, Puglisi SJ. New algorithms on wavelet trees and applications to information retrieval. Theor Comput Sci. 2012;426/427:25–41. Sirén J, Välimäki N, Mäkinen V. Indexing graphs for path queries with applications in genome research. IEEE/ACM Trans Comput Biol Bioinformatics. 2014;11(2):375–88. Manber U, Myers GW. Suffix arrays: a new method for on-line string searches. SIAM J Sci Comput. 1993;22(5):935–48. Burrows M, Wheeler DJ. A block sorting lossless data compression algorithm. Technical Report 124, Digital Equipment Corporation, Palo Alto, California. 1994. Ferragina P, Manzini G. Indexing compressed text. J ACM. 2005;52(4):552–81. Gog S, Beller T, Moffat A, Petri M. From theory to practice: plug and play with succinct data structures. In: Proceedings of the 13th international symposium on experimental algorithms (SEA). 2014. p. 326–37. The authors would like to thank Jouni Sirén for many insightful conversations concerning the GCSA. MDM, SJP, and CB were funded by the National Science Foundation (1618814). SJP was also supported in part by the Academy of Finland via Grant Number 294143. Department of Computer Science, Colorado State University, Fort Collins, CO, USA Martin D. Muggli Department of Computer Science, University of Helsinki, Helsinki, Finland Simon J. Puglisi Computer & Information Science & Engineering, University of Florida, Gainesville, FL, USA Christina Boucher SJP, MDM, and CB conceived of the concept and designed the algorithm and data structures for the methods described in this paper. MDM and CB designed the experiments. MDM implemented the method, and performed all experiments and software testing. MDM and CB drafted the manuscript. All authors read and edited the manuscript. All authors read and approved the final manuscript. Correspondence to Christina Boucher. A preliminary version appeared in the proceedings of WABI 2018. Muggli, M.D., Puglisi, S.J. & Boucher, C. Kohdista: an efficient method to index and query possible Rmap alignments. Algorithms Mol Biol 14, 25 (2019) doi:10.1186/s13015-019-0160-9. Keywords: Optical mapping; Index-based data structures; FM-index. Selected papers from WABI 2018.
Hardy-Ramanujan Journal Volume 44 - Special Commemorative volume in honour of Srinivasa Ramanujan - 2021 1. Power partitions and a generalized eta transformation property Don Zagier. In their famous paper on partitions, Hardy and Ramanujan also raised the question of the behaviour of the number $p_s(n)$ of partitions of a positive integer~$n$ into $s$-th powers and gave some preliminary results. We give first an asymptotic formula to all orders, and then an exact formula, describing the behaviour of the corresponding generating function $P_s(q) = \prod_{n=1}^\infty \bigl(1-q^{n^s}\bigr)^{-1}$ near any root of unity, generalizing the modular transformation behaviour of the Dedekind eta-function in the case $s=1$. This is then combined with the Hardy-Ramanujan circle method to give a rather precise formula for $p_s(n)$ of the same general type as the one that they gave for~$s=1$. There are several new features, the most striking being that the contributions coming from various roots of unity behave very erratically rather than decreasing uniformly as in their situation. Thus in their famous calculation of $p(200)$ the contributions from arcs of the circle near roots of unity of order 1, 2, 3, 4 and 5 have 13, 5, 2, 1 and 1 digits, respectively, but in the corresponding calculation for $p_2(100000)$ these contributions have 60, 27, 4, 33, and 16 digits, respectively, of wildly varying sizes. 2. Generating functions and congruences for 9-regular and 27-regular partitions in 3 colours Nayandeep Deka Baruah ; Hirakjyoti Das. Let $b_{\ell;3}(n)$ denote the number of $\ell$-regular partitions of $n$ in 3 colours. In this paper, we find some general generating functions and new infinite families of congruences modulo arbitrary powers of $3$ when $\ell\in\{9,27\}$. For instance, for positive integers $n$ and $k$, we have\begin{align*}b_{9;3}\left(3^k\cdot n+3^k-1\right)&\equiv0~\left(\mathrm{mod}~3^{2k}\right),\\b_{27;3}\left(3^{2k+3}\cdot n+\dfrac{3^{2k+4}-13}{4}\right)&\equiv0~\left(\mathrm{mod}~3^{2k+5}\right).\end{align*} 3. A variant of the Hardy-Ramanujan theorem M. Ram Murty ; V Kumar Murty. For each natural number $n$, we define $\omega^*(n)$ to be the number of primes $p$ such that $p-1$ divides $n$. We show that in contrast to the Hardy-Ramanujan theorem which asserts that the number $\omega(n)$ of prime divisors of $n$ has a normal order $\log\log n$, the function $\omega^*(n)$ does not have a normal order. We conjecture that for some positive constant $C$, $$\sum_{n\leq x} \omega^*(n)^2 \sim Cx(\log x). $$ Another conjecture related to this function emerges, which seems to be of independent interest. More precisely, we conjecture that for some constant $C>0$, as $x\to \infty$, $$\sum_{[p-1,q-1]\leq x} {1 \over [p-1, q-1]} \sim C \log x, $$ where the summation is over primes $p,q\leq x$ such that the least common multiple $[p-1,q-1]$ is less than or equal to $x$. 4. Explicit Values for Ramanujan's Theta Function ϕ(q) Bruce C Berndt ; Örs Rebák. This paper provides a survey of particular values of Ramanujan's theta function $\varphi(q)=\sum_{n=-\infty}^{\infty}q^{n^2}$, when $q=e^{-\pi\sqrt{n}}$, where $n$ is a positive rational number. First, descriptions of the tools used to evaluate theta functions are given. Second, classical values are briefly discussed. Third, certain values due to Ramanujan and later authors are given. Fourth, the methods that are used to determine these values are described.
Lastly, an incomplete evaluation found in Ramanujan's lost notebook, but now completed and proved, is discussed with a sketch of its proof. 5. Truncated Series with Nonnegative Coefficients from the Jacobi Triple Product Liuquan Wang. Andrews and Merca investigated a truncated version of Euler's pentagonal number theorem and showed that the coefficients of the truncated series are nonnegative. They also considered the truncated series arising from Jacobi's triple product identity, and they conjectured that its coefficients are nonnegative. This conjecture was posed by Guo and Zeng independently and confirmed by Mao and Yee using different approaches. In this paper, we provide a new combinatorial proof of their nonnegativity result related to Euler's pentagonal number theorem. Meanwhile, we find an analogous result for a truncated series arising from Jacobi's triple product identity in a different manner. 6. Quantum q-series identities Jeremy Lovejoy. As analytic statements, classical $q$-series identities are equalities between power series for $|q|<1$. This paper concerns a different kind of identity, which we call a quantum $q$-series identity. By a quantum $q$-series identity we mean an identity which does not hold as an equality between power series inside the unit disk in the classical sense, but does hold on a dense subset of the boundary -- namely, at roots of unity. Prototypical examples were given over thirty years ago by Cohen and more recently by Bryson-Ono-Pitman-Rhoades and Folsom-Ki-Vu-Yang. We show how these and numerous other quantum $q$-series identities can all be easily deduced from one simple classical $q$-series transformation. We then use other results from the theory of $q$-hypergeometric series to find many more such identities. Some of these involve Ramanujan's false theta functions and/or mock theta functions. 7. Partition Identities for Two-Color Partitions George E Andrews. Three new partition identities are found for two-color partitions. The first relates to ordinary partitions into parts not divisible by 4, the second to basis partitions, and the third to partitions with distinct parts. The surprise of the strangeness of this trio becomes clear in the proof. 8. A survey on t-core partitions Hyunsoo Cho ; Byungchan Kim ; Hayan Nam ; Jaebum Sohn. $t$-core partitions have played important roles in the theory of partitions and related areas. In this survey, we briefly summarize interesting and important results on $t$-cores from classical results like how to obtain a generating function to recent results like simultaneous cores. Since there have been numerous studies on $t$-cores, it is infeasible to survey all the interesting results. Thus, we mainly focus on the roles of $t$-cores in number theoretic aspects of partition theory. This includes the modularity of $t$-core partition generating functions, the existence of $t$-core partitions, asymptotic formulas and arithmetic properties of $t$-core partitions, and combinatorial and number theoretic aspects of simultaneous core partitions. We also explain some applications of $t$-core partitions, which include relations between core partitions and self-conjugate core partitions, a $t$-core crank explaining Ramanujan's partition congruences, and relations with class numbers. 9. Generating Functions for Certain Weighted Cranks Shreejit Bandyopadhyay ; Ae Yee. Recently, George Beck posed many interesting partition problems considering the number of ones in partitions. 
In this paper, we first consider the crank generating function weighted by the number of ones and obtain analytic formulas for this weighted crank function under conditions of the crank being less than or equal to some specific integer. We connect these cumulative and point crank functions to the generating functions of partitions with certain sizes of Durfee rectangles. We then consider a generalization of the crank for $k$-colored partitions, which was first introduced by Fu and Tang, and investigate the corresponding generating function for this crank weighted by the number of parts in the first subpartition of a $k$-colored partition. We show that the cumulative generating functions are the same as the generating functions for certain unimodal sequences. 10. Filter integrals for orthogonal polynomials T Amdeberhan ; Adriana Duncan ; Victor H Moll ; Vaishavi Sharma. Motivated by an expression by Persson and Strang on an integral involving Legendre polynomials, stating that the square of $P_{2n+1}(x)/x$ integrated over $[-1,1]$ is always $2$, we present analog results for Hermite, Chebyshev, Laguerre and Gegenbauer polynomials as well as the original Legendre polynomial with even index. 11. Note on Artin's Conjecture on Primitive Roots Sankar Sitaraman. E. Artin conjectured that any integer $a > 1$ which is not a perfect square is a primitive root modulo $p$ for infinitely many primes $ p.$ Let $f_a(p)$ be the multiplicative order of the non-square integer $a$ modulo the prime $p.$ M. R. Murty and S. Srinivasan \cite{Murty-Srinivasan} showed that if $\displaystyle \sum_{p < x} \frac 1 {f_a(p)} = O(x^{1/4})$ then Artin's conjecture is true for $a.$ We relate the Murty-Srinivasan condition to sums involving the cyclotomic periods from the subfields of $\mathbb Q(e^{2\pi i /p})$ corresponding to the subgroups $<a> \subseteq \mathbb F_p^*.$ 12. Proof of the functional equation for the Riemann zeta-function Jay Mehta ; P. -Y Zhu. In this article, we shall prove a result which enables us to transfer from finite to infinite Euler products. As an example, we give two new proofs of the infinite product for the sine function depending on certain decompositions. We shall then prove some equivalent expressions for the functional equation, i.e. the partial fraction expansion and the integral expression involving the generating function for Bernoulli numbers. The equivalence of the infinite product for the sine function and the partial fraction expansion for the hyperbolic cotangent function leads to a new proof of the functional equation for the Riemann zeta function. 13. The last chapter of the Disquisitiones of Gauss Laura Anderson ; Jasbir S Chahal ; Jaap Top. This exposition reviews what exactly Gauss asserted and what he proved in the last chapter of {\sl Disquisitiones Arithmeticae} about dividing the circle into a given number of equal parts. In other words, what did Gauss claim and actually prove concerning the roots of unity and the construction of a regular polygon with a given number of sides. Some history of Gauss's solution is briefly recalled, and in particular many relevant classical references are provided which we believe deserve to be better known. 14. A Dynamical Proof of the Prime Number Theorem Redmond Mcnamara. We present a new, elementary, dynamical proof of the prime number theorem.
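As a small concrete companion to the first abstract above, the generating function $P_s(q) = \prod_{n=1}^\infty \bigl(1-q^{n^s}\bigr)^{-1}$ can be expanded directly to tabulate $p_s(n)$. The sketch below does exactly that and reproduces the classical value of $p(200)$ mentioned there; it is purely illustrative and has nothing to do with the asymptotic or exact formulas of the paper.

```python
# Tabulate p_s(n), the number of partitions of n into s-th powers, by expanding
# P_s(q) = prod_{k>=1} 1/(1 - q^{k^s}) up to q^N (standard coin-change recurrence).
def power_partitions(N, s):
    p = [1] + [0] * N              # p[0] = 1: the empty partition
    k = 1
    while k ** s <= N:             # multiply in the factor 1/(1 - q^{k^s})
        part = k ** s
        for n in range(part, N + 1):
            p[n] += p[n - part]
        k += 1
    return p

print(power_partitions(200, 1)[200])     # p(200) = 3972999029388, as in Hardy-Ramanujan
# Digit count of p_2(100000), the power-partition example from the abstract
# (takes a few seconds in pure Python).
print(len(str(power_partitions(100_000, 2)[100_000])))
```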
Hypochlorous acid mechanism When hypochlorous acid is dissolved in water, is it the chlorine atom or the oxygen that serves as the antimicrobial agent? aqueous-solution Richardbernstein You're talking about oxidation here, right? When you dissolve $\ce{HOCl}$ in water, it doesn't split up into the three constituent atoms. But there can be some redox reactions centered on one atom. Not sure if that's what causes the antimicrobial properties, though. – ManishEarth Dec 8 '12 at 7:10 Yes I'm talking about redox reactions, not just dissociation. – Richardbernstein Dec 8 '12 at 16:30 The chemical disinfection (inactivation of bacteria) in fresh water can occur through a number of mechanisms, including oxidation of cell walls, inactivation of key enzymes and disruption of nucleic acids, thereby rendering them non-functional. The precise mechanism of inactivation depends upon the nature of the micro-organism (bacteria, spores, viruses). When chlorine was first used as a disinfectant in the USA in 1908 (slightly earlier in Europe), its germicidal power was commonly attributed to the liberation of 'nascent oxygen' from hypochlorous acid $(HOCl)$. $HOCl\rightarrow HCl+\frac{1}{2}O_{2}$ Subsequent investigations (Chang, 1944; Green and Stumpf, 1946) debunked this theory and showed that chlorine reacts irreversibly with the enzymatic system of bacteria, thereby killing it. Chang demonstrated this by showing that hydrogen peroxide and potassium permanganate, which release considerable amounts of nascent oxygen, nevertheless exhibit only weak germicidal activity. Chang was able to demonstrate that there is no liberation of oxygen involved in the inactivation of bacteria by chlorine. In any case, oxygen gas $(O_{2})$ is a much weaker oxidizing agent than hypochlorous acid, as a comparison of standard reduction potentials shows. Hypochlorous acid is likely to be effective both as an oxidizing agent and through direct chlorination of the protoplasm and reaction with lipoproteins to form toxic chloro compounds that interfere with cell division (Chang, 1944). Hypochlorous acid $(HOCl)$ dissociates in water to $H^{+}$ and hypochlorite ion $(OCl^{-})$. $HOCl\leftrightarrow H^{+} + OCl^{-}$ Because hypochlorous acid $(HOCl)$ is uncharged, it is better able to penetrate cell walls than other chlorine species and is about 80 times more effective than hypochlorite $(OCl^{-})$ at chlorination disinfection. The microbial disinfection due to total chlorine ($HOCl + OCl^{-}$) therefore depends upon pH and temperature. Whilst low pH and temperature favour disinfection by chlorine as $HOCl$, the rate of diffusion through the cell membrane is faster at high temperature. The metabolic activity of the cell, the rate of reaction of chlorine with enzymes and hence the rate of inactivation of the cell are also faster at higher temperature. Disinfection by redox reaction using ozone gas $(O_{3})$ (a fast and powerful oxidizing agent) is sometimes used as an alternative to chlorination to reduce or eliminate the toxicity caused by residual chlorine, especially where there is concern over organic matter in the water which may produce trichloromethanes. Chang, S.L., Destruction of microorganisms, J. Am. Water Works Assoc., 36 (1944), pp. 1192–1206 Green, D.E. and Stumpf, P.K., The mode of action of chlorine, J. Am. Water Works Assoc., 38 (1946), pp. 1301–1305 Water Quality Control Handbook, Second Edition, by E. Roberts Alley (2007, McGraw Hill).
White's Handbook of Chlorination and Alternative Disinfectants, 5th Edition, by Black & Veatch Corp (2010, John Wiley & Sons) theo
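Because the germicidal power depends on how much of the free chlorine is present as uncharged $HOCl$ rather than $OCl^{-}$, the dissociation equilibrium discussed in the answer fixes that split as a function of pH. The sketch below uses the Henderson-Hasselbalch relation with a textbook $pK_a$ of about 7.5 at 25 °C; both the ideal-solution assumption and the exact $pK_a$ value are simplifications for illustration.

```python
# Fraction of free chlorine present as HOCl (the stronger disinfectant) versus
# OCl-, from the acid-dissociation equilibrium HOCl <-> H+ + OCl-.
# Assumes a dilute, ideal solution and pKa ~ 7.5 at 25 C (typical textbook value).

def hocl_fraction(pH, pKa=7.5):
    """Henderson-Hasselbalch: [OCl-]/[HOCl] = 10**(pH - pKa)."""
    return 1.0 / (1.0 + 10 ** (pH - pKa))

for pH in (6.0, 7.0, 7.5, 8.0, 9.0):
    print(f"pH {pH}: {100 * hocl_fraction(pH):5.1f}% HOCl")
```

Running this shows the point made in the answer: below about pH 7 nearly all of the free chlorine is HOCl, while above pH 8 the less effective hypochlorite ion dominates.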
A DLI Index bermudagrass climate Japan Light I've shown the sum of the mean daily temperature over 2014 for four locations: Fukuoka and Tokyo in Japan, Holly Springs in Mississippi, and Watkinsville in Georgia. Fukuoka had the highest accumulated temperature, then Tokyo, then Watkinsville, and the coolest of those four locations was Holly Springs. If one were wanting to rank these transition zone locations for suitability of ultradwarf bermudagrass, the temperature is an important factor. Looking at temperature alone, it would seem that in 2014, Holly Springs would have been the worst of those locations for bermudagrass, and Fukuoka would have been the best for bermudagrass. But the light available for photosynthesis needs to be considered too. The photosynthetically active radiation (PAR) is reported as the daily light integral (DLI). For these same four cities, the cumulative DLI in 2014 has Watkinsville the highest, then Holly Springs, then Tokyo, and last is Fukuoka. So now, if one were ranking the locations by light, it would seem that Watkinsville would have been the best location in 2014 for ultradwarf bermudagrass, and Fukuoka would have been the worst. The daily mean temperature can be represented as a value between 0 and 1 – the temperature-based growth potential (GP). Plotting the cumulative sum of the C4 GP, one gets the same ranking of the locations at the end of the year, but on a different scale. Note another difference between the cumulative temperature and the cumulative GP plots. The slope of the GP plot is 0 (flat) when the temperatures are too cold for the grass to grow. This cumulative GP plot makes it easier to distinguish seasonal influence on growth than on the chart showing accumulated temperature. Can the DLI also be expressed as a value with a minimum of 0 and a maximum of 1, like the GP? Yes. One can express the actual DLI ($DLI_{actual}$) as a fraction of the maximum DLI ($DLI_{max}$) for that day and location. I'll call that the DLI index, and it will be expressed on a scale of 0 to 1. $$\text{DLI index} = \frac{DLI_{actual}}{DLI_{max}}$$ The $DLI_{max}$ varies based on latitude and day of the year. I've calculated a maximum DLI as 75% of the global solar radiation, as described here. The actual DLI will be the same $DLI_{max}$ on a perfectly clear day with no clouds. When there are clouds or other particles in the air that block some of the light from reaching the surface, the DLI will be lower than $DLI_{max}$. Rather than plotting the cumulative sum of DLI, one can plot the cumulative sum of the DLI index. The cumulative DLI index plot gives the same separation at the end of the year as in the cumulative DLI plot, just on a different scale. Unlike the cumulative GP plot, the DLI index doesn't have times of the year with a slope of 0. If it gets cold enough, grass will go dormant. That's what it means when the GP plot has a flat stretch. But DLI never gets that low. I've often been asked how to adjust the GP for light. And I usually respond that by explaining that temperature has more of an effect on turf growth than does light. So I usually prefer to just use the GP as an estimate of the potential for grass to grow. But if one wants to make an adjustment to GP for DLI, a reasonable way to do it is to multiply the GP by the DLI index.1 If we call the GP multiplied by the DLI index a growth index, it allows us to combine the light and temperature for a location to estimate their influence on growth. 
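A short sketch of the two indices described above may help. The DLI index follows the definition given in the post; the temperature-based GP is written here as a Gaussian of the daily mean temperature with an assumed optimum of 31 °C and spread of 7 °C for a warm-season (C4) grass, since the post does not restate the GP formula, so treat those numbers as placeholders.

```python
# Sketch of the DLI index and the combined growth index described above.
# The Gaussian growth-potential (GP) form and its parameters (optimum 31 C,
# spread 7 C for a C4 grass) are assumptions for illustration only.
import math

def growth_potential(t_mean_c, t_opt=31.0, spread=7.0):
    """Temperature-based growth potential on a 0-1 scale (assumed Gaussian form)."""
    return math.exp(-0.5 * ((t_mean_c - t_opt) / spread) ** 2)

def dli_index(dli_actual, dli_max):
    """DLI index = DLI_actual / DLI_max, capped at 1 on perfectly clear days."""
    return min(dli_actual / dli_max, 1.0)

def growth_index(t_mean_c, dli_actual, dli_max):
    """GP adjusted for light: GP multiplied by the DLI index."""
    return growth_potential(t_mean_c) * dli_index(dli_actual, dli_max)

# One midsummer day: 28 C mean temperature, measured DLI of 38 mol/m2/day
# against a clear-sky maximum of 58 mol/m2/day (numbers are illustrative).
print(round(growth_index(28.0, 38.0, 58.0), 2))
```

Summing daily values of this growth index over a season gives the cumulative curves compared across locations in the next paragraph.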
Remember that by cumulative temperature, Fukuoka was highest, and Holly Springs was lowest. By cumulative DLI, Watkinsville was highest, and Fukuoka was lowest. By plotting the cumulative growth index, which combines temperature and light, Watkinsville is highest, then Tokyo, Holly Springs, and Fukuoka. This plot is essentially taking the effect of temperature on growth and adjusting it for light, or conversely taking the light effect on growth and adjusting it for temperature. One can easily compare relative differences between locations on a daily, weekly, monthly, or annual basis. With the ease of measuring DLI at different locations on a property using quantum meters, one can also use this growth index to demonstrate the effect of tree or structural shade. For C3 grass, one may adjust the DLI index to account for the light saturation point.
Characteristic roots for two-lag linear delay differential equations David M. Bortz 1, Department of Applied Mathematics, University of Colorado, Boulder, CO 80309-0526, United States Received October 2015 Revised February 2016 Published September 2016 We consider the class of two-lag linear delay differential equations and develop a series expansion to solve for the roots of the nonlinear characteristic equation. The expansion draws on results from complex analysis, combinatorics, special functions, and classical analysis for differential equations. Supporting numerical results are presented along with application of our method to study the stability of a two-lag model from ecology. Keywords: delay differential equations, Lambert W, exponential polynomials. Mathematics Subject Classification: Primary: 34K99, 65H17; Secondary: 41A5. Citation: David M. Bortz. Characteristic roots for two-lag linear delay differential equations. Discrete & Continuous Dynamical Systems - B, 2016, 21 (8) : 2409-2422. doi: 10.3934/dcdsb.2016053
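For readers wanting to experiment, the characteristic equation behind this abstract can also be explored numerically. The sketch below is not the series expansion of the paper: it solves the one-lag case exactly with the Lambert W function and simply runs complex Newton iterations for a two-lag example $\dot{x}(t) = a x(t) + b x(t-\tau_1) + c x(t-\tau_2)$, whose characteristic equation is $\lambda = a + b e^{-\lambda\tau_1} + c e^{-\lambda\tau_2}$. All coefficients are placeholders.

```python
# Characteristic roots of scalar delay equations: Lambert W for one lag,
# complex Newton iterations from a grid of starting points for two lags.
# This is an illustrative sketch, not the series expansion of the paper.
import numpy as np
from scipy.special import lambertw

def one_lag_roots(a, b, tau, branches=range(-3, 4)):
    """Exact roots of lambda = a + b*exp(-lambda*tau):
    lambda = a + W_k(b*tau*exp(-a*tau)) / tau on branch k."""
    return [a + lambertw(b * tau * np.exp(-a * tau), k) / tau for k in branches]

def two_lag_roots(a, b, c, tau1, tau2, grid=4.0, n=15, tol=1e-10):
    """Roots of f(z) = z - a - b e^{-z tau1} - c e^{-z tau2} via Newton's method."""
    f = lambda z: z - a - b * np.exp(-z * tau1) - c * np.exp(-z * tau2)
    df = lambda z: 1 + b * tau1 * np.exp(-z * tau1) + c * tau2 * np.exp(-z * tau2)
    roots = []
    for re in np.linspace(-grid, grid, n):
        for im in np.linspace(-grid, grid, n):
            z = complex(re, im)
            for _ in range(60):
                d = df(z)
                if d == 0:
                    break
                step = f(z) / d
                z -= step
                if abs(step) < tol:
                    break
            if abs(f(z)) < 1e-8 and not any(abs(z - r) < 1e-6 for r in roots):
                roots.append(z)
    return sorted(roots, key=lambda z: -z.real)

roots = two_lag_roots(a=-1.0, b=0.5, c=-0.8, tau1=1.0, tau2=2.0)
# If every root found has negative real part, the zero solution appears stable.
print("rightmost root found:", roots[0])
```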
Global food insecurity and famine from reduced crop, marine fishery and livestock production due to climate disruption from nuclear war soot injection Lili Xia ORCID: orcid.org/0000-0001-7821-97561, Alan Robock ORCID: orcid.org/0000-0002-6319-56561, Kim Scherrer ORCID: orcid.org/0000-0001-6198-57452,3, Cheryl S. Harrison4, Benjamin Leon Bodirsky ORCID: orcid.org/0000-0002-8242-67125,6, Isabelle Weindl ORCID: orcid.org/0000-0002-7651-69305, Jonas Jägermeyr ORCID: orcid.org/0000-0002-8368-00185,7,8, Charles G. Bardeen9, Owen B. Toon10 & Ryan Heneghan11 Nature Food volume 3, pages 586–596 (2022) Atmospheric soot loadings from nuclear weapon detonation would cause disruptions to the Earth's climate, limiting terrestrial and aquatic food production. Here, we use climate, crop and fishery models to estimate the impacts arising from six scenarios of stratospheric soot injection, predicting the total food calories available in each nation post-war after stored food is consumed. In quantifying impacts away from target areas, we demonstrate that soot injections larger than 5 Tg would lead to mass food shortages, and livestock and aquatic food production would be unable to compensate for reduced crop output, in almost all countries. Adaptation measures such as food waste reduction would have limited impact on increasing available calories. We estimate more than 2 billion people could die from nuclear war between India and Pakistan, and more than 5 billion could die from a war between the United States and Russia—underlining the importance of global cooperation in preventing nuclear war. Extraordinary events such as large volcanic eruptions or nuclear war could cause sudden global climate disruptions and affect food security. Global volcanic cooling caused by sulfuric acid aerosols in the stratosphere has resulted in severe famines and political instability, for example, after the 1783 Laki eruption in Iceland1 or the 1815 Tambora eruption in Indonesia2,3. For a nuclear war, the global cooling would depend on the yields of the weapons, the number of weapons and the targets, among other atmospheric and geographic factors.
In a nuclear war, bombs targeted on cities and industrial areas would start firestorms, injecting large amounts of soot into the upper atmosphere, which would spread globally and rapidly cool the planet4,5,6. Such soot loadings would cause decadal disruptions in Earth's climate7,8,9, which would impact food production systems on land and in the oceans. In the 1980s, there were investigations of nuclear winter impacts on global agricultural production10 and food availability11 for 15 nations, but new information now allows us to update those estimates. Several studies have recently analysed changes of major grain crops12,13,14 and marine wild catch fisheries15 for different scenarios of regional nuclear war using climate, crop and fishery models. A war between India and Pakistan, which recently are accumulating more nuclear weapons with higher yield16, could produce a stratospheric loading of 5–47 Tg of soot. A war between the United States, its allies and Russia—who possess more than 90% of the global nuclear arsenal—could produce more than 150 Tg of soot and a nuclear winter4,5,6,7,8,9. While amounts of soot injection into the stratosphere from the use of fewer nuclear weapons would have smaller global impacts17, once a nuclear war starts, it may be very difficult to limit escalation18. The scenarios we studied are listed in Table 1. Each scenario assumes a nuclear war lasting one week, resulting in the number and yield of nuclear weapons shown in the table and producing different amounts of soot in the stratosphere. There are many war scenarios that could result in similar amounts of smoke and thus similar climate shocks, including wars involving the other nuclear-armed nations (China, France, United Kingdom, North Korea and Israel). Table 1 Number of weapons on urban targets, yields, direct fatalities from the bomb blasts and resulting number of people in danger of death due to famine for the different scenarios we studied Recent catastrophic forest fires in Canada in 201719 and Australia in 2019 and 202020,21 produced 0.3–1 Tg of smoke (0.006–0.02 Tg soot), which was subsequently heated by sunlight and lofted high in the stratosphere. The smoke was transported around the world and lasted for many months. This adds confidence to our simulations that predict the same process would occur after nuclear war. Nuclear war would primarily contaminate soil and water close to where nuclear weapons were used22. Soot disperses globally once it reaches the upper atmosphere; thus, our results are globally relevant regardless of the warring nations. Here, we focus on the climate disruption from nuclear war, which would impact global food production systems on land and in the oceans. So far, an integrated estimate of the impacts of the entire range of war scenarios on both land- and ocean-based food production is missing. We examine the impacts of six war scenarios, generating 5 Tg to 150 Tg of soot, on the food supply (Table 1). We use model simulations of major crops and wild-caught marine fish together with estimated changes in other food and livestock production to assess the impacts on global calorie supply. Impacts on crops and fish catch productivity Using climate, crop and fishery models (Methods), we calculate calorie production for different food groups, for each year after a range of six different stratospheric soot injections. The climatic impacts would last for about a decade but would peak in the first few years (Fig. 1). Fig. 
1: Climatic impacts by year after different nuclear war soot injections. a–f, Changes in surface temperature (a), solar radiation (c) and precipitation (e) averaged over global crop regions of 2000 (Supplementary Fig. 1) and sea surface temperature (b), solar radiation (d) and net primary productivity (f) over the oceans following the six stratospheric soot-loading scenarios studied here for 15 years following a nuclear war, derived from simulations in ref. 18. These variables are the direct climate forcing for the crop and fishery models. The left y axes are the anomalies of monthly climate variables from simulated nuclear war minus the climatology of the control simulation, which is the average of 45 years of simulation. The right y axes are the percentage change relative to the control simulation. The wars take place on 15 May of Year 1, and the year labels are on 1 January of each year. For comparison, during the last Ice Age 20,000 years ago, global average surface temperatures were about 5 °C cooler than present. Ocean temperatures decline less than for crops because of the ocean's large heat capacity. Ocean solar radiation loss is less than for crops because most ocean is in the Southern Hemisphere, where slightly less smoke is present. Global average calorie production from the crops we simulated decreased 7% in years 1–5 after the war even under the smallest, 5 Tg soot scenario (Fig. 2a; comparable to previous multi-model results14, Supplementary Fig. 2) and up to 50% under the 47 Tg scenario. In the 150 Tg soot case, global average calorie production from crops would decrease by around 90% 3–4 years after the nuclear war. The changes would induce a catastrophic disruption of global food markets, as even a 7% global yield decline compared with the control simulation would exceed the largest anomaly ever recorded since the beginning of Food and Agricultural Organization (FAO) observational records in 196114. Fig. 2: Calorie production changes for crops and fish, and accumulated carbon change for grasses following different nuclear war soot injections. a–c, Global average annual crop calorie production changes (%; maize, wheat, rice and soybeans, weighted by their observed production (2010) and calorie content; a), marine fish production changes (%; b) and combined crop and fish calorie production changes (%; c) after nuclear war for the different soot-injection scenarios. d, Grass leaf carbon is a combination of C3 and C4 grasses, and the change is calculated as annual accumulated carbon. For context, the grey line (and shaded area) in a are the average (and standard deviation) of six crop models from the Global Gridded Crop Model Intercomparison (GGCMI, ref. 14) under the 5 Tg scenario. CLM5crop shows a conservative response to nuclear war compared with the multi-model GGCMI response. Fish are another important food resource, especially in terms of protein supply. Nuclear war would reduce the wild fish catch15, but the reduction would be less than for land agriculture (Fig. 2b), because reduction in oceanic net primary productivity—the base of the marine food web—is moderate (from 3% in 5 Tg to 37% in 150 Tg), and ocean temperature changes are less pronounced (Fig. 1). Terrestrial crop production dominates the total calorie change of crops and fisheries combined (Fig. 2c), because global crop production is 24 times higher than wild fisheries in terms of dry matter, and staple crops contain around five times more calories than fish per unit retail mass23,24. 
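Fig. 2a aggregates the four simulated crops into one calorie-production curve by weighting each crop's change by its observed 2010 production and its calorie content. A minimal sketch of that weighting follows; the yield changes, production figures and calorie contents below are illustrative placeholders rather than the study's inputs.

```python
# Sketch of the production- and calorie-weighted average behind the crop curves
# in Fig. 2a.  All numbers are illustrative placeholders.
def weighted_calorie_change(yield_change, production_t, kcal_per_kg):
    """yield_change: fractional change per crop; production_t: pre-war output;
    kcal_per_kg: calorie content.  Returns the aggregate percentage change."""
    weights = {c: production_t[c] * kcal_per_kg[c] for c in yield_change}
    total = sum(weights.values())
    return 100 * sum(yield_change[c] * weights[c] for c in yield_change) / total

change = weighted_calorie_change(
    yield_change={"maize": -0.12, "wheat": -0.05, "rice": -0.03, "soybean": -0.08},
    production_t={"maize": 850e6, "wheat": 650e6, "rice": 700e6, "soybean": 260e6},
    kcal_per_kg={"maize": 3650, "wheat": 3340, "rice": 3600, "soybean": 4460},
)
print(f"aggregate crop calorie production change: {change:.1f}%")
```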
In total, marine wild capture fisheries contribute 0.5% of total calories but 3.5% of global average protein supply (Fig. 3 and Supplementary Fig. 3). Fig. 3: Global average human diet and protein composition and usage of crop-based products. a, Global average human diet composition23. Percentages are % of available calories. Veg. is vegetables. b, Global average human protein diet composition23. Marine wild capture contributes 75% of marine fish46. Percentages are % of dry matter production. c, Distribution of four major cereal crops and marine fish between human food and other uses24,47. Percentages are % of dry matter production. d, Usage of crop-based products in 2010 (% of dry matter crop-based production)26. The colour gradient legend in grey in c illustrates the usage of different crops and fish in colours. While humans consume most of the wheat and rice grown, most maize and soybeans are used for livestock feed. Cooling from nuclear wars causes temperature limitations for crops, leading to delayed physiological maturity and additional cold stress14. Calorie reduction from agriculture and marine fisheries shows regional differences (Supplementary Fig. 4), with the strongest percentage reductions over high latitudes in the Northern Hemisphere. Even for the India–Pakistan case, many regions become unsuitable for agriculture for multiple years. For example, in the 27 Tg case, mid- to high latitudes of the Northern Hemisphere show reductions in crop calorie production greater than 50%, along with fish catch reductions of 20–30%. The nuclear-armed nations in mid- to high latitude regions (China, Russia, United States, France, North Korea and United Kingdom) show calorie reductions from 30% to 86%, and in lower latitudes (India, Pakistan and Israel), the reduction is less than 10% (Supplementary Tables 1 and 2). Impacts in warring nations are likely to be dominated by local problems, such as infrastructure destruction, radioactive contamination and supply chain disruptions, so the results here apply only to indirect effects from soot injection in remote locations. Impacts on total human calorie intake To estimate the effect on the total food calories available for human consumption, we consider diet composition, calorie content of different food types, crop usage and changes in food production that we did not directly model (Methods). In 2010, FAO23 reported that 51% of global calorie availability came from cereals, 31% from vegetables, fruit, roots, tubers and nuts and 18% from animal and related products, of which fish contributed 7%, with marine wild catch contributing 3% (Fig. 3a). The crops and fish we simulated provide almost half of these calories and 40% of the protein. Further, only portions of the simulated food production are available for human consumption. Many crops (for example, maize and soybean) are used mainly for non-food uses such as livestock feed (Fig. 3c). In addition, the total number of calories available as food is highly dependent on human reactions to nuclear wars. We assume that international trade in food is suspended as food-exporting nations halt exports in response to declining food production (Methods). Furthermore, we considered three societal responses, Livestock, Partial Livestock and No Livestock (Supplementary Table 3). For the Livestock response scenario, representing a minimal adaptation to the climate-driven reduction in food production, people continue to maintain livestock and fish as normal. 
Although harvesting a larger share of crop residues for feed or adding new feed such as insect-based supplements may increase the potential livestock feed, we assume that no new feed supplements are added and that the ratio of agricultural grains, residues and grazed biomass to livestock feed stays the same. Calories from all crops are reduced by the average reduction in our four simulated crops, and calorie changes from marine wild-caught fish are calculated with business-as-usual fishing behaviour.

The No Livestock response represents a scenario where livestock (including dairy and eggs) and aquaculture production are not maintained after the first year, and the national fractions of crop production previously used as feed are now available to feed humans. In addition, fishing pressure intensifies, simulated through a fivefold increase in fish price15. Similar responses took place in New England in the 'year without a summer' after the 1815 Tambora volcanic eruption2. Even though the temperature changes then were smaller than modelled in any of the nuclear war scenarios here, crop failures forced farmers to sell their livestock because they could not feed them3, and previously unpalatable fish were added to their diet2,3,25. We test a full range (0–100%) of the fraction of food-competing feed26 that could be used by humans and select 50% as an example in some plots and tables. Between the Livestock and No Livestock cases, we also consider a Partial Livestock case, in which the portion of livestock grain feed remaining after part of it is converted to human consumption is still used for raising livestock.

Final biofuel products (biodiesel and ethanol) account for only 0.5% of plant-based products27, which could be repurposed as food in the form of plant oil (~1.8% of total food calories) and alcohol (3.4% of total food calories). Byproducts of biofuel production are already counted in livestock feed and waste27, so we add only the calories from the final biofuel products in our calculations. Global average household waste is around 20% (ref. 28). If we assume that after a nuclear war there would be 50% less or 100% less household waste, these extra calories would become available.

National consequences of calorie loss depend on the amount of fallow cropland, regional climate impacts and population levels, and are calculated assuming a complete halt of international food trade (Methods; Fig. 4). Here, we focus on two calorie intake levels in nations: the calorie intake needed to maintain normal physical activity and a calorie intake lower than the basal metabolic rate (also known as the resting energy expenditure)29. The two levels vary between countries depending on the composition and physical activity of the population. Food consumption below the first level would not allow a person to maintain their normal physical activity and keep their weight at the same time, and consumption below the basal metabolic rate would cause fast weight loss even with only sedentary activity and thus would quickly lead to death29. With a 5 Tg injection, most nations show decreasing calorie intake relative to the 2010 level but still sufficient to maintain weight (Fig. 4 and Supplementary Fig. 5). With larger soot-injection cases, severe starvation occurs in most of the mid–high latitude nations under the Livestock case.
When 50% of food-competing feed is converted for human consumption in each nation, some nations (such as the United States) would maintain sufficient calorie intake under scenarios with smaller soot injections, but weight loss or even severe starvation would occur under larger soot-injection cases (Fig. 4 and Supplementary Fig. 5).

Fig. 4: Food intake (kcal per capita per day) in Year 2 after different nuclear war soot injections. The left map is the calorie intake status in 2010 with no international trade; the left column is the Livestock case; the middle column is the Partial Livestock case, with 50% of livestock feed used for human food and the other 50% still used to feed livestock; and the right column is the No Livestock case, with 50% of livestock feed used for human food. All maps assume no international trade and that the total calories are evenly distributed within each nation. Regions in green mean food consumption can support the current physical activity in that country; regions in yellow indicate calorie intake that would cause people to lose weight, with only sedentary physical activity supported; and regions in red indicate that daily calorie intake would be less than needed to maintain a basal metabolic rate (also called resting energy expenditure) and thus would lead to death after an individual exhausted their body energy reserves in stored fat and expendable muscle28. 150 Tg + 50% waste means half of the household waste is added to food consumption, and 150 Tg + 100% waste means all household waste is added to food consumption.

Under the 150 Tg scenario, most nations would have calorie intake lower than resting energy expenditure29. One exception is Australia. After we turn off international trade, wheat contributes almost 50% of the calorie intake in Australia, and production of rice, maize and soybean in Australia is less than 1% that of wheat23,24. Therefore, the wheat response to simulated nuclear wars largely determines calorie intake in Australia. Because spring wheat is used to represent wheat, and simulated spring wheat there shows increases or only small reductions under the nuclear war scenarios, in which more favourable temperatures occur for food production, the calorie intake in Australia is higher than in other nations. However, this analysis is limited by the FAO data, which are collected at national levels. Within each nation, particularly large ones, there may be large regional inequities driven by infrastructure limitations, economic structures and government policies. New Zealand would also experience smaller impacts than other countries. But if this scenario should actually take place, Australia and New Zealand would probably see an influx of refugees from Asia and other countries experiencing food insecurity.
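The three colour categories in Fig. 4 come down to comparing a nation's per-capita calorie intake against two thresholds. A minimal sketch of that classification is below; the threshold values are illustrative placeholders, not the paper's national-level numbers, which vary with population composition and activity.

```python
# Hypothetical sketch of the Fig. 4 intake categories; thresholds are placeholders.

def classify_intake(kcal_per_capita_day, activity_requirement, basal_metabolic_rate):
    """Return the Fig. 4 category for a nation's per-capita calorie intake."""
    if kcal_per_capita_day >= activity_requirement:
        return "green: supports current physical activity"
    if kcal_per_capita_day >= basal_metabolic_rate:
        return "yellow: weight loss, only sedentary activity supported"
    return "red: below resting energy expenditure"

# Example with placeholder thresholds (kcal per capita per day).
print(classify_intake(1600, activity_requirement=2100, basal_metabolic_rate=1300))
```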
The global average calorie supply post-war (Fig. 5a) implies that extreme regional reductions (Fig. 4) could be overcome to some extent through trade—but equal distribution of food globally would probably be a major challenge. Under the Livestock case, if food were evenly distributed over the world and household waste were 20% as in 201028, there would be enough food for everyone under the 5 Tg scenario to support their normal physical activity; if household waste were reduced from 20% to 10%, the extra calories would support everyone under the 16 Tg scenario; and if there were no household waste, even under the 27 Tg case everyone would consume sufficient calories for survival. With the most optimistic case—100% of livestock crop feed to humans, no household waste and equitable global food distribution—there would be enough food production for everyone under the 47 Tg case.

Assuming international trade ceased and food was distributed optimally within each country11, such that the maximum number of people were given the calorie intake to maintain their weight and normal physical activity28, the percentage of the population that could be supported can be calculated (Fig. 5b and Supplementary Fig. 6b). Under the 150 Tg case, most countries would have less than 25% of the population survive by the end of Year 2 (Supplementary Fig. 7). In 2020, 720 million–811 million people suffered from undernutrition worldwide30, despite food production being more than sufficient to nourish a larger world population. Thus, it is likely that food distribution would be inequitable both between and within countries.

Fig. 5: Overview of global calorie intake and sensitivity to livestock and food waste assumptions. a, Global average change in calorie intake per person per day in Year 2 post-war under the Livestock case (yellow bars) and for the Partial Livestock case (red bars), assuming that all food and waste is evenly distributed. For the Partial Livestock case, additional calories potentially available through human consumption of animal feed, mainly maize and soybeans, are plotted for various portions of converted animal feed (pink tick marks), and the remaining livestock crop feed is used for raising livestock. Critical food intake levels are marked in the right margin. b, Without international trade, the global population (%) that could be supported, although underweight, by domestic food production at the end of Year 2 after a nuclear war if those people receive the calories supporting their regular physical activity29 while the rest of the population receives no food, under the Livestock and Partial Livestock cases. The blue line in b shows the percentage of the population that can be supported when food production does not change but international trade is stopped. National data are calculated first (Supplementary Tables 4 and 5 and Supplementary Fig. 5) and then aggregated to global data.
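A minimal sketch of the Fig. 5b quantity follows: the fraction of a nation's population that domestic production could support at the activity-maintaining calorie level, if food were distributed to maximise that number and the rest received none. This is an editorial illustration with made-up numbers, not the authors' code or data.

```python
# Sketch of the "optimal within-country distribution" fraction (Fig. 5b idea).

def fraction_supported(total_kcal_per_day, requirement_kcal_per_person, population):
    """Fraction of the population that can be fed at the activity-maintaining level."""
    people_fed = total_kcal_per_day / requirement_kcal_per_person
    return min(1.0, people_fed / population)

# Illustrative numbers only: 50 million people, 2,100 kcal requirement,
# 70 billion kcal available per day under a given soot-injection scenario.
print(f"{fraction_supported(70e9, 2100, 50e6):.0%} of the population supported")
```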
Using state-of-the-art climate, crop and fishery models, we calculate how the availability of food supplies could change globally under various nuclear war scenarios. We combine crops and marine fish and also consider whether livestock and animal products continue to be an important food source. For a regional nuclear war, large parts of the world may suffer famine—even given the compensating behaviours considered in this paper. Using crops fed to livestock as human food could offset food losses locally but would have limited impact on the total amount of food available globally, especially with large atmospheric soot injections, when the growth of feed crops and pastures would be severely impaired by the resulting climate perturbation. Reducing household food waste could help in the small nuclear war cases but not in the larger nuclear wars, due to the large climate-driven reduction in overall production. We find particularly severe crop declines in major exporting countries such as Russia and the United States, which could easily trigger export restrictions and cause severe disruptions in import-dependent countries24. Our no-trade response illustrates this risk—showing that African and Middle Eastern countries would be severely affected.

Our analysis of the potential impacts of nuclear war on the food system does not address some aspects of the problem, leaving them for future research. In all the responses, we do not consider reduced human populations due to direct or indirect mortality and a possibly reduced birth rate. The total number and composition of population changes would affect available labour, calorie production and distribution. Also, we do not consider farm-management adaptations such as changes in cultivar selection, switching to more cold-tolerant crops or greenhouses31, and alternative food sources such as mushrooms, seaweed, methane single cell protein, insects32, hydrogen single cell protein33 and cellulosic sugar34. Although farmer adaptation35 and alternative food sources could reduce the negative impact from a simulated nuclear war, it would be challenging to make all the shifts in time to affect food availability in Year 2, and further work should be done on these interventions. Current food storage can alleviate the shortage in Year 1 (ref. 14) but would have less impact on Year 2 unless it were rationed by governments or by the market. Expanding or shifting cropping land to favourable climate regions would increase crop production. Further studies on adaptation and the impacts on short-term food availability are needed, but those topics are beyond the scope of this study. Adaptation in fisheries is also not considered, such as changes in the use of discarded bycatch and offal in fisheries. Other factors not considered here include reduced availability of fuel, fertilizer and infrastructure for food production after a war, the effects of elevated ultraviolet radiation36 on food production and radioactive contamination37. While this analysis focuses on calories, humans would also need proteins and micronutrients to survive the ensuing years of food deficiency (we estimate the impact on protein supply in Supplementary Fig. 3). Large-scale use of alternative foods, requiring little-to-no light to grow in a cold environment38, has not been considered but could be a lifesaving source of emergency food if such production systems were operational.

In conclusion, the reduced light, global cooling and likely trade restrictions after nuclear wars would be a global catastrophe for food security. The negative impact of climate perturbations on total crop production can generally not be offset by livestock and aquatic food (Fig. 5a). More than 2 billion people could die from a nuclear war between India and Pakistan, and more than 5 billion could die from a war between the United States and Russia (Table 1). The results here provide further support to the 1985 statement by US President Ronald Reagan and Soviet General Secretary Mikhail Gorbachev, restated by US President Joe Biden and Russian President Vladimir Putin in 2021, that 'a nuclear war cannot be won and must never be fought'.

We use a state-of-the-art global climate model to calculate the climatic and biogeochemical changes caused by a range of stratospheric soot injections, each associated with a nuclear war scenario18 (Tables 1 and 2). Simulated changes in surface air temperature, precipitation and downward direct and diffuse solar radiation are used to force a state-of-the-art crop model to estimate how the productivity of the major crops (maize, rice, spring wheat and soybean) would be affected globally, and changes in oceanic net primary production and sea surface temperature are used to force a global marine fisheries model.
We combine these results with assumptions about how other crop production, livestock production, fish production and food trade could change and calculate the amount of food that would be available for each country in the world after a nuclear war.

Table 2: Changes in food calorie availability (%) in Year 2 after a nuclear war for the nations with nuclear weapons and global average assuming no trade after simulated nuclear wars under the Livestock case, the Partial Livestock case and the No Livestock case with 50% livestock feed to human consumption.

The simulated surface climate disruptions due to the nuclear war scenarios are summarized in Fig. 1. Averaged over the current crop regions, surface downwelling solar radiation reduces by 10 W m−2 (5 Tg soot injection) to 130 W m−2 (150 Tg soot injection). With less energy received, the maximum average 2 m air temperature reductions range from 1.5 °C (5 Tg soot injection) to 14.8 °C (150 Tg soot injection), peaking within 1–2 years after the war, with temperature reduction lasting for more than 10 years. The cooling also reduces precipitation over summer monsoon regions. Similar but smaller reductions of solar radiation and temperature are projected in marine regions (Fig. 1b,d), with resulting changes in lower trophic-level marine primary productivity. We applied local changes at every grid cell to the crop and fish models.

Climate model

All nuclear war scenarios9,18 are simulated using the Community Earth System Model (CESM)39. This model includes interactive atmosphere, land, ocean and sea ice. Both atmosphere and land have a horizontal resolution of 1.9° × 2.5°, and the ocean has a horizontal resolution of 1°. The atmospheric model is the Whole Atmosphere Community Climate Model version 4 (ref. 40). The land model is the Community Land Model version 4 with the carbon–nitrogen cycle. CESM output at 1- and 3-hour resolution, including 2 m air temperature, precipitation, specific humidity and downward longwave radiation and solar radiation (separated into direct and diffuse radiation), is used to drive the offline crop model simulations. There are three ensemble members of the control simulation, which repeats the climate forcing of 2000 for 15 years, three ensemble members of the 5 Tg case and one simulation for each other nuclear war scenario. In all the simulations, the soot is arbitrarily injected during the week starting on May 15 of Year 1. Our scenarios assume that all stored food is consumed in Year 1 and we present analysis of the remaining food in Year 2. If the war occurred at the end of a calendar year, there would still be food available in Year 2, so what we label Year 2 should be relabelled Year 3. However, since the severe climate and food impacts last for more than 5 years (Figs. 1 and 2), the same conclusions apply to a world after a nuclear war.

Direct climate model output use

Because climate models have biases, it is typical to bias correct model output before using it as input for crop models. There are various techniques that attempt to use past observational data to address changes in the mean and variance, but none are perfect and all are limited by assumptions that future relations between model output and crop model input can be based on the recent past. A common method14 is the delta method, in which an observational reanalysis weather dataset is used and monthly means of temperature, precipitation and insolation are modified according to the climate model simulations.
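For concreteness, here is a generic sketch of the additive delta method just described. It is an editorial illustration of that common alternative, not what this study does (the study uses raw model output); array names are hypothetical, and precipitation is often handled multiplicatively instead.

```python
# Generic additive delta-method sketch (illustrative only).
import numpy as np

def delta_correct(obs_monthly, model_control_monthly, model_war_monthly):
    """Shift an observed monthly climatology by the model's war-minus-control anomaly."""
    anomaly = model_war_monthly - model_control_monthly   # (12,) monthly mean anomalies
    return obs_monthly + anomaly

obs = np.array([20.0] * 12)                  # observed monthly temperature (°C), placeholder
ctrl = np.array([19.5] * 12)                 # model control climatology
war = ctrl - np.linspace(1.5, 8.0, 12)       # model post-war monthly means
print(delta_correct(obs, ctrl, war))
```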
The delta method comes with the advantage of realistic internal variability, which is important for crop modelling12,13,14, but it does not adjust changes in variance, which might be an unrealistic assumption under larger soot-injection scenarios, such as the 150 Tg case. Here, because we are using a crop model that has already been calibrated with the same climate model that we are using, we use raw climate model output (1.9° × 2.5°) to force the crop model, and this allows the variance to change too.

Crop model

Crop simulation uses the Community Land Model version 5 crop (CLM5crop)41,42,43 in the Community Earth System Model version 2 (CESM2). Dynamic vegetation is not turned on. CLM5crop has six active crops (maize, rice, soybeans, spring wheat, sugar cane and cotton) and also simulates natural vegetation, such as grasses. In this study, we used the output of the four food crops (maize, rice, soybeans and spring wheat) and grasses. Although CLM5crop does not simulate winter wheat, we assume winter wheat production is changed by the same amount as spring wheat, which has been found in other studies14; however, this may underestimate the winter wheat response, because winter wheat would experience colder temperatures during its growing period that would be more likely to cross critical thresholds14. Surface ozone and downward ultraviolet radiation would also be impacted by nuclear war36, but CLM5crop is not able to consider those impacts, which might exacerbate the losses. In addition, the crop model does not consider the availability of pollinators, killing frosts and alternative seeds. The model simulates rainfed crops and irrigated crops separately, and all results presented here refer to the total production of rainfed and irrigated crops. Irrigated crops are simulated under the assumption that freshwater availability is not limiting43. Although evaporation is reduced with cooling, it is possible that our results may underestimate the negative impact from precipitation reduction, especially for the large injection cases. CLM5crop was evaluated41 using FAO observations (average of 1991–2010), and it does a reasonable job of reproducing the observed spatial patterns of maize, rice, soybean and spring wheat yield. Also, time series of crop yields simulated by CLM5crop compare well with FAO data from 2006 to 2018, and CLM5crop reasonably represents global total production and average yields of maize, rice, soybean and spring wheat42. CLM5crop is spun up for 1,060 years by repeating the past 10 years of the CESM control to reach the equilibrium of four soil carbon pools. The crop simulations are at the same resolution as the CESM simulations (1.9° × 2.5°). The crop planting date is determined by growing degree days, and the location of cropland is fixed for all crops.

Fishery model

Fish and fisheries responses are simulated with the BiOeconomic mArine Trophic Size-spectrum (BOATS) model15,44,45. BOATS was used to calculate the size-structured biomass of commercially targeted fish based on gridded (1° horizontal resolution) inputs of sea surface temperature and oceanic net primary production from CESM. The model also interactively simulates fishing effort and fish catch through a bioeconomic component that depends on fish price, cost of fishing, catchability and fisheries regulation15. Details are found in ref. 15 and references therein.

Combining crop and marine fish data

Supplementary Table 1 shows the total calorie reductions for each of the nine nuclear states from just the simulated crops and marine fish.
Data for countries can be found in Supplementary Table 2. To calculate nation-level calories available from simulated crops and fish, we weight the production by the calorie content of each type of food. We use data from FAO23,24,46,47. Nation-level calorie reduction (%) from total production of maize, rice, soybean, wheat and marine fish is thus calculated as:

$$w_{iy} = \frac{P_i c_i R_{iy}}{\sum_{i=1}^{5} P_i c_i R_{iy}}$$

$$R_y = \sum_{i=1}^{5} R_{iy} w_{iy},$$

where index i is maize, rice, soybean, wheat or marine fish wild catch, $w_{iy}$ is the calorie weight of each commodity per country each year, $P_i$ is the national production of item i in the FAO-Food Balance Sheet (FBS)23,24, $c_i$ is calories per 100 g dry mass for each item23, $R_{iy}$ is national production reduction (%) of each item in year y after the nuclear wars and $R_y$ is nation-averaged calorie reduction (%) of the five items in year y after the nuclear wars.

Effects on other food types

National averaged calorie reduction (%) of the four simulated crops is applied to the total calories of all crops in 2010 to estimate simulated nuclear war impacts on this category.

Livestock and aquaculture

We assume these two types of food share a similar response to simulated nuclear war as they involve feeding animals in a relatively controlled environment. For global calculations for livestock, we assume that 46% are fed by pasture and 54% are fed by crops and processed products48 and use national-level data26 to calculate reduction of livestock feed from pasture and crop-based products. We assume that livestock production is linearly correlated with the feed. Annual leaf carbon of grasses (both C3 and C4) is used to estimate pasture changes, and reduction of the four simulated crops is used for crop feed changes. For aquaculture, the feed is only from crops and processed products, and the production is also correlated with the amount of feed fish receive. Direct climate change impacts on livestock and fish are not considered. Inland fish capture is not considered in this study. Because inland fish contribute only 7% of total fish production46, adding inland fisheries would not change the main conclusions of this study.

All food commodity trade calculations are based on the 2010 FAO Commodity Balance Sheet (FAO-CBS), FAO-FBS and processed data from previous studies24,27,28,47. This dataset provides the production and usage of each food and non-food agricultural product for each country and imports and exports, and thus allows the calculation on a national basis of food usage and calorie availability. Domestic availability of a food in each country comes from domestic production and reserves, reduced by exports and increased by imports. We calculate no international trade by applying the ratio of domestic production and domestic supply to each food category and the food production in different usages:

$$C_{\text{food,no-trade}} = C_{\text{food}} \times \frac{P_{\text{dp}}}{P_{\text{ds}}}$$

where $C_{\text{food}}$ is national-level calorie supply from different food types26,46, $C_{\text{food,no-trade}}$ is national-level calorie supply from different food types with the assumption of no international trade, $P_{\text{dp}}$ is national-level domestic production for each type of food in FAO-CBS and $P_{\text{ds}}$ is national-level domestic supply for each type of food in FAO-CBS. Domestic supply is the available food on the market, including domestic production, export and import.
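As a sketch, the two displayed relations can be transcribed directly into code. This is an editorial illustration, not the authors' code; the dictionaries and their values are placeholders, and the weights deliberately follow the printed definition, in which $w_{iy}$ includes $R_{iy}$ itself.

```python
# Literal transcription of the displayed weighting and no-trade formulas (illustrative).

def weighted_reduction(P, c, R):
    """P: production, c: kcal per 100 g, R: fractional reduction; dicts keyed by item.
    Assumes at least one item has a nonzero reduction."""
    denom = sum(P[i] * c[i] * R[i] for i in P)            # sum_i P_i c_i R_iy
    w = {i: P[i] * c[i] * R[i] / denom for i in P}        # w_iy as printed above
    return sum(R[i] * w[i] for i in P)                    # R_y = sum_i R_iy w_iy

def no_trade_supply(C_food, P_domestic_production, P_domestic_supply):
    """C_food,no-trade = C_food * P_dp / P_ds, applied per food category."""
    return {k: C_food[k] * P_domestic_production[k] / P_domestic_supply[k]
            for k in C_food}
```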
Food usage of maize, soybean, rice and wheat is calculated from FAO-CBS. In FAO-CBS, maize products are maize and byproduct maize germ oil, soybean products are soybean and byproducts soybean oil and soybean cake, rice products are rice and byproduct rice bran oil and wheat product is wheat. Products for food purposes are the sum of food supply in each category and the processing product minus the total byproducts (the difference includes processing for the purpose of alcohol or sugar).

Calorie calculations

For the Livestock case, national-level available calories are calculated by

$$\begin{aligned}
C_{L} = {} & C_{\text{plantbased}} \times (1 - R_{cy}) + C_{\text{livestock-ruminant}} \times (1 - R_{\text{grass}}) \\
& + C_{\text{livestock-monogastric}} \times (1 - R_{cy}) + C_{\text{livestock-monogastric}} \times R_{\text{grass}} \times (1 - R_{cy}) \times \frac{F_{\text{ruminant-cropfeed}}}{F_{\text{monogastric-cropfeed}}} \\
& + C_{\text{aquaculture}} \times (1 - R_{cy}) + C_{\text{marine-catch}} \times (1 - R_{\text{marine-catch},y}) \\
& + (1 - R_{cy}) \times C_{\text{plantbased}} \times \frac{f_{\text{final-product-biofuel}}}{f_{\text{food}}}
\end{aligned}$$

where $C_L$ is calories available in each nation (kcal per capita per day) under the Livestock case, $C_{\text{plantbased}}$, $C_{\text{livestock-ruminant}}$ and $C_{\text{livestock-monogastric}}$ are calories available from plant-based products, ruminants and monogastrics27 and $C_{\text{aquaculture}}$ and $C_{\text{marine-catch}}$ are calculated by calorie availability from fish27 multiplied by the ratio of aquaculture and catch46. $R_{\text{grass}}$ is grass production change, and $R_{\text{marine-catch},y}$ is marine capture change. $F_{\text{ruminant-cropfeed}}$ is the fraction of crop feed for ruminants, and $F_{\text{monogastric-cropfeed}}$ is the fraction of crop feed for monogastrics26. $R_{cy}$ is crop production change calculated as:

$$w_{iy} = \frac{P_i c_i R_{iy}}{\sum_{i=1}^{4} P_i c_i R_{iy}}, \qquad R_{cy} = \sum_{i=1}^{4} R_{iy} w_{iy}$$

where index i is maize, rice, soybean or wheat, $w_{iy}$ is the calorie weight of each commodity per country each year, $P_i$ is the national production of item i in FAO-CBS47, $c_i$ is calories per 100 g retail weight for each item23 and $R_{iy}$ is national production change (%) of each item in year y after the nuclear wars. $f_{\text{final-product-biofuel}}$ is the fraction of final biofuel products in plant-based products, and $f_{\text{food}}$ is the fraction of food in plant-based products.

For the No Livestock case, national-level available calories are calculated by

$$\begin{aligned}
C_{\text{NL}} = {} & C_{\text{plantbased}} \times (1 - R_{cy}) + C_{\text{marine-catch}} \times (1 - R_{\text{marine-catch},y}) \\
& + C_{\text{plantbased}} \times f_{\text{feed-to-food}} \times (1 - R_{cy}) \times p_{\text{feed-for-human}} \\
& + (1 - R_{cy}) \times C_{\text{plantbased}} \times \frac{f_{\text{final-product-biofuel}}}{f_{\text{food}}}
\end{aligned}$$

where $C_{\text{NL}}$ is national-level available calories in the No Livestock case, $f_{\text{feed-to-food}}$ is the fraction of food crops that are used as feed relative to their usage as food, calculated based on their calorie content26, and $p_{\text{feed-for-human}}$ is the percentage of livestock grain feed used for human consumption. We tested 0%, 20%, 40%, 50%, 60%, 80% and 100% and used 50% for Table 2 and Fig. 4.
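The two case formulas above transcribe directly into code. The sketch below is an editorial illustration whose argument names mirror the symbols above; all inputs (kcal-per-capita terms and fractional changes) would be supplied by the reader, not taken from the authors' datasets.

```python
# Sketch of the Livestock (C_L) and No Livestock (C_NL) case calculations.

def livestock_case_calories(C_plant, C_rum, C_mono, C_aqua, C_marine,
                            R_cy, R_grass, R_marine, F_rum_cropfeed,
                            F_mono_cropfeed, f_biofuel, f_food):
    """National calories per capita per day under the Livestock case (C_L)."""
    return (C_plant * (1 - R_cy)
            + C_rum * (1 - R_grass)
            + C_mono * (1 - R_cy)
            + C_mono * R_grass * (1 - R_cy) * F_rum_cropfeed / F_mono_cropfeed
            + C_aqua * (1 - R_cy)
            + C_marine * (1 - R_marine)
            + (1 - R_cy) * C_plant * f_biofuel / f_food)

def no_livestock_calories(C_plant, C_marine, R_cy, R_marine,
                          f_feed_to_food, p_feed_for_human, f_biofuel, f_food):
    """National calories per capita per day under the No Livestock case (C_NL)."""
    return (C_plant * (1 - R_cy)
            + C_marine * (1 - R_marine)
            + C_plant * f_feed_to_food * (1 - R_cy) * p_feed_for_human
            + (1 - R_cy) * C_plant * f_biofuel / f_food)
```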
For the Partial Livestock case, national-level available calories are calculated by

$$\begin{aligned}
C_{\text{PL}} = {} & C_{\text{plantbased}} \times (1 - R_{cy}) + C_{\text{marine-catch}} \times (1 - R_{\text{marine-catch},y}) \\
& + C_{\text{plantbased}} \times f_{\text{feed-to-food}} \times (1 - R_{cy}) \times p_{\text{feed-for-human}} \\
& + (1 - p_{\text{feed-for-human}}) \times \Big( C_{\text{livestock-ruminant}} \times (1 - R_{\text{grass}}) + C_{\text{livestock-monogastric}} \times (1 - R_{cy}) \\
& \qquad + C_{\text{livestock-monogastric}} \times R_{\text{grass}} \times (1 - R_{cy}) \times \frac{F_{\text{ruminant-cropfeed}}}{F_{\text{monogastric-cropfeed}}} \Big) \\
& + (1 - R_{cy}) \times C_{\text{plantbased}} \times \frac{f_{\text{final-product-biofuel}}}{f_{\text{food}}}
\end{aligned}$$

where $C_{\text{PL}}$ is the national-level available calories in the Partial Livestock case. On the basis of the assumed percentage of livestock crop feed converted to human consumption, instead of discarding the remaining portion of livestock crop feed as in the No Livestock case, here we use the remaining livestock crop feed to raise livestock.

The percentage of national household waste is calculated by

$$P_{\text{waste}} = 100\% \times \frac{C_{\text{available}} - C_{\text{intake}}}{C_{\text{available}}}$$

where $P_{\text{waste}}$ is the percentage of national household waste relative to food calorie availability in 2010, $C_{\text{available}}$ is the food calorie availability per day per person in each country and $C_{\text{intake}}$ is the national calorie intake per day per person27.
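For completeness, the Partial Livestock formula and the household-waste percentage can be sketched in the same way, again as an editorial illustration with argument names mirroring the symbols above.

```python
# Sketch of the Partial Livestock case (C_PL) and the waste percentage (P_waste).

def partial_livestock_calories(C_plant, C_rum, C_mono, C_marine, R_cy, R_grass,
                               R_marine, F_rum_cropfeed, F_mono_cropfeed,
                               f_feed_to_food, p_feed_for_human, f_biofuel, f_food):
    """National calories per capita per day under the Partial Livestock case (C_PL)."""
    livestock_part = (C_rum * (1 - R_grass)
                      + C_mono * (1 - R_cy)
                      + C_mono * R_grass * (1 - R_cy) * F_rum_cropfeed / F_mono_cropfeed)
    return (C_plant * (1 - R_cy)
            + C_marine * (1 - R_marine)
            + C_plant * f_feed_to_food * (1 - R_cy) * p_feed_for_human
            + (1 - p_feed_for_human) * livestock_part
            + (1 - R_cy) * C_plant * f_biofuel / f_food)

def waste_percentage(C_available, C_intake):
    """P_waste = 100% x (C_available - C_intake) / C_available."""
    return 100.0 * (C_available - C_intake) / C_available
```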
Calorie requirements

The population percentage supported by the available calories calculated for the Livestock, Partial Livestock and No Livestock responses indicates the macro-level consequences for food security (Fig. 4). The current average human available calorie supply is 2,855 kcal per capita per day, including food intake and food waste (Fig. 3). Calorie requirements vary significantly with age, gender, size, climate, level of activity and underlying medical conditions. Ref. 27 estimated the national-level calorie availability, calorie intake, and calories from plant-based products, livestock and fish, and also calculated the calorie intake of an underweight population with current physical activity, the calorie intake of an underweight population with sedentary physical activity and the calorie intake lower than the basal metabolic rate. We assume that the calorie intake of an underweight population with current physical activity is needed to support life and regular labour activity.

Uncertainties

This work was done with one Earth system model, with only one ensemble member for all the cases with soot injections >5 Tg, only one crop model and only one fishery model. For the 5 Tg case and the control, there are three ensemble members, but only the ensemble averages are used. The three ensemble members for the 5 Tg case are very similar (Supplementary Fig. 8), so climate variability for the larger forcings would be much smaller than the signal. CESM is a state-of-the-art climate model, and its simulations of the impacts of nuclear war have been almost identical to simulations with other models for the 5 Tg (refs. 49,50) and 150 Tg (ref. 9) cases. However, further developments in climate models, such as including organic carbon in fire emissions and better simulating aerosol growth and interactions with the surrounding environment, may improve climate prediction after a nuclear war. CLM5crop and BOATS are also state-of-the-art models, but future simulations with different models would certainly be useful. CLM5crop compares well with other crop models in response to nuclear war forcing14 (Supplementary Fig. 2). If anything, CLM5crop underestimates the crop response to nuclear war (Fig. 2 and Supplementary Fig. 2). Because most crop models were developed for the current or warmer climates, further research is needed to understand how crops react to a suddenly cold environment. Our study is a first step towards revealing national food security after nuclear wars, but crops may not respond uniformly to the same forcing in each nation, given different farming practices. In addition, multi-model assessment will be essential to fully investigate this problem, and crop model developments are important to understand impacts from surface ozone, ultraviolet radiation and freshwater availability. Furthermore, local radioactive contamination and climate change from nuclear war would impact the insect community. The influence on pests, pollinators and other insects is unclear, and hence further studies are needed.

Some assumptions in this study could be examined in future work. For example, to turn off international trade, the ratio of local production to domestic supply is applied on a national level. Also, to calculate national calorie intake after nuclear wars, we assume that food is evenly distributed in each country. Economic models will be necessary to further understand the contributions of trade and local food distribution systems to human calorie intake after nuclear wars. This study uses calorie intake from ref. 27, and food loss from harvesting is not considered. If human behaviour and the food industry were to change substantially, this would affect our conclusions.

Data of crop yield, grass production, national livestock feed, national calorie and national plant product usage are available at https://osf.io/YRBSE/. Additional data that support the findings of this study are available from the corresponding author upon request.

Code availability

The source code for the CESM(WACCM) model used in this study is freely available at https://www.cesm.ucar.edu/working_groups/Whole-Atmosphere/code-release.html, and the code for CLM5 is available at https://www.cesm.ucar.edu/models/cesm2/land/.

1. Oman, L., Robock, A., Stenchikov, G. L. & Thordarson, T. High-latitude eruptions cast shadow over the African monsoon and the flow of the Nile. Geophys. Res. Lett. 33, L18711 (2006).
2. Wood, G. D. Tambora: The Eruption That Changed The World (Princeton Univ. Press, 2014).
3. Stommel, H. & Stommel, E. Volcano Weather: The Story of 1816, The Year Without a Summer (Seven Seas Press, 1983).
4. Turco, R. P., Toon, O. B., Ackerman, T. P., Pollack, J. B. & Sagan, C. Nuclear winter: global consequences of multiple nuclear explosions. Science 222, 1283–1292 (1983).
5. Aleksandrov, V. V. & Stenchikov, G. L. On the modeling of the climatic consequences of the nuclear war. Proc. Applied Math (Computing Centre of the USSR Academy of Sciences, 1983).
6. Robock, A. Snow and ice feedbacks prolong effects of nuclear winter. Nature 310, 667–670 (1984).
7. Robock, A. et al. Climatic consequences of regional nuclear conflicts. Atm. Chem. Phys. 7, 2003–2012 (2007).
8. Robock, A., Oman, L. & Stenchikov, G. L. Nuclear winter revisited with a modern climate model and current nuclear arsenals: still catastrophic consequences. J. Geophys. Res. 112, D13107 (2007).
9. Coupe, J., Bardeen, C. G., Robock, A. & Toon, O. B. Nuclear winter responses to global nuclear war in the Whole Atmosphere Community Climate Model Version 4 and the Goddard Institute for Space Studies ModelE. J. Geophys. Res. Atmos. 124, 8522–8543 (2019).
10. Harwell, M. A. & Cropper Jr., W. P. in Environmental Consequences of Nuclear War, SCOPE 28, Volume II Ecological and Agricultural Effects, 2nd Edn (eds Harwell, M. A. & Hutchinson, T. C.) Ch. 4 (Wiley, 1989).
11. Cropper Jr., W. P. & Harwell, M. A. in Environmental Consequences of Nuclear War, SCOPE 28, Volume II Ecological and Agricultural Effects, 2nd Edn (eds Harwell, M. A. & Hutchinson, T. C.) Ch. 5 (Wiley, 1989).
12. Xia, L. & Robock, A. Impacts of a nuclear war in South Asia on rice production in mainland China. Climatic Change 116, 357–372 (2013).
13. Özdoğan, M., Robock, A. & Kucharik, C. Impacts of a nuclear war in South Asia on soybean and maize production in the Midwest United States. Climatic Change 116, 373–387 (2013).
14. Jägermeyr, J. et al. A regional nuclear conflict would compromise global food security. Proc. Natl Acad. Sci. USA 117, 7071–7081 (2020).
15. Scherrer, K. J. N. et al. Marine wild-capture fisheries after nuclear war. Proc. Natl Acad. Sci. USA 117, 29748–29758 (2020).
16. Toon, O. B. et al. Atmospheric effects and societal consequences of regional scale nuclear conflicts and acts of individual nuclear terrorism. Atm. Chem. Phys. 7, 1973–2002 (2007).
17. Robock, A. & Zambri, B. Did smoke from city fires in World War II cause global cooling? J. Geophys. Res. Atmos. 123, 10314–10325 (2018).
18. Toon, O. B. et al. Rapid expansion of nuclear arsenals by Pakistan and India portends regional and global catastrophe. Sci. Adv. 5, eaay5478 (2019).
19. Yu, P. et al. Black carbon lofts wildfire smoke high into the stratosphere to form a persistent plume. Science 365, 587–590 (2019).
20. Yu, P. et al. Persistent stratospheric warming due to 2019–2020 Australian wildfire smoke. Geophys. Res. Lett. 48, e2021GL092609 (2021).
21. Peterson, D. A. et al. Australia's Black Summer pyrocumulonimbus super outbreak reveals potential for increasingly extreme stratospheric smoke events. npj Clim. Atmos. Sci. 4, 38 (2021).
22. Ambio Advisory Group. Reference scenarios: how a nuclear war might be fought. In Nuclear War: The Aftermath (ed Peterson, J.) Ch. 3 (Pergamon Press, 1983).
23. Food Composition Tables (FAO, 2022); http://www.fao.org/3/X9892E/X9892e05.htm
24. Food Balances (FAO, 2022); http://www.fao.org/faostat/en/#data/FBSH
25. Falkendal, T. et al. Grain export restrictions during COVID-19 risk food insecurity in many low- and middle-income countries. Nat. Food 2, 11–14 (2021).
26. Weindl, I. et al. Livestock and human use of land: productivity trends and dietary choices as drivers of future land and carbon dynamics. Glob. Environ. Change 47, 121–132 (2017).
27. Bodirsky, B. et al. mrcommons: MadRat Commons Input Data Library. R version 1.9.3 https://github.com/pik-piam/mrcommons (2022).
28. Bodirsky, B. L. et al. The ongoing nutrition transition thwarts long-term targets for food security, public health and environmental protection. Sci. Rep. 10, 19778 (2020).
29. National Research Council. Energy. In Recommended Dietary Allowances, 10th Edn, Ch. 3 (US National Academies Press, 1989).
30. FAO, IFAD, UNICEF, WFP and WHO. The State of Food Security and Nutrition in the World 2021 (FAO, 2021).
31. Alvarado, K. A., Mill, A., Pearce, J. M., Vocaet, A. & Denkenberger, D. Scaling of greenhouse crop production in low sunlight scenarios. Sci. Total Environ. https://doi.org/10.1016/j.scitotenv.2019.136012 (2019).
32. Denkenberger, D. & Pearce, J. M. Feeding Everyone No Matter What: Managing Food Security After Global Catastrophe (Academic Press, 2014).
33. Martínez, J. B. G. et al. Potential of microbial protein from hydrogen for preventing mass starvation in catastrophic scenarios. Sustain. Prod. Consum. 25, 234–247 (2021).
34. Throup, J. et al. Rapid repurposing of pulp and paper mills, biorefineries, and breweries for lignocellulosic sugar production in global food catastrophes. Food Bioprod. Process. 131, 22–39 (2022).
35. Hochman, G. et al. Economic incentives modify agricultural impacts of nuclear war. Env. Res. Lett. 17, 054003 (2022).
36. Bardeen, C. G. et al. Extreme ozone loss following nuclear war resulting in enhanced surface ultraviolet radiation. J. Geophys. Res. Atmos. 126, e2021JD035079 (2021).
37. Grover, H. D. & Harwell, M. A. Biological effects of nuclear war II: impact on the biosphere. Bioscience 35, 576–583 (1985).
38. Denkenberger, D. C. & Pearce, J. M. Feeding everyone: solving the food crisis in event of global catastrophes that kill crops or obscure the sun. Futures 72, 57–68 (2015).
39. Hurrell, J. W. et al. The Community Earth System Model: a framework for collaborative research. Bull. Am. Meteorol. Soc. 94, 1339–1360 (2013).
40. Marsh, D. R. et al. Climate change from 1850 to 2005 simulated in CESM1(WACCM). J. Clim. 26, 7372–7390 (2013).
41. Lombardozzi, D. L. et al. Simulating agriculture in the Community Land Model Version 5. J. Geophys. Res. Biogeosci. 125, e2019JG005529 (2020).
42. Fan, Y. et al. Solar geoengineering can alleviate climate change pressure on crop yields. Nat. Food 2, 373–381 (2021).
43. Lawrence, D. M. et al. The Community Land Model version 5: description of new features, benchmarking, and impact of forcing uncertainty. J. Adv. Modeling Earth Systems 11, 4245–4287 (2019).
44. Carozza, D. A., Bianchi, D. & Galbraith, E. D. Formulation, general features and global calibration of a bioenergetically-constrained fishery model. PLoS ONE 12, e0169763 (2017).
45. Carozza, D. A., Bianchi, D. & Galbraith, E. D. The ecological module of BOATS-1.0: a bioenergetically constrained model of marine upper trophic levels suitable for studies of fisheries and ocean biogeochemistry. Geosci. Model Dev. 9, 1545–1565 (2016).
46. The State of World Fisheries and Aquaculture 2020. Sustainability in Action (FAO, 2020); https://doi.org/10.4060/ca9229en
47. Commodity Balances (FAO, 2022); https://www.fao.org/faostat/en/#data/CB
48. Mottet, A. et al. Livestock: on our plates or eating at our table? A new analysis of the feed/food debate. Glob. Food Sec. 14, 1–8 (2017).
49. Mills, M. J., Toon, O. B., Lee-Taylor, J. & Robock, A. Multi-decadal global cooling and unprecedented ozone loss following a regional nuclear conflict. Earth's Future 2, 161–176 (2014).
50. Stenke, A. et al. Climate and chemistry effects of a regional scale nuclear conflict. Atmos. Chem. Phys. 13, 9713–9729 (2013).
51. Toon, O. B., Robock, A. & Turco, R. P. Environmental consequences of nuclear war. Phys. Today 61, 37–42 (2008).
This study was supported by the Open Philanthropy Project, with partial support from the European Research Council under the European Union's Horizon 2020 Research and Innovation Programme under grant agreement 682602. A.R. and L.X. were supported by National Science Foundation grants AGS-2017113 and ENG-2028541. K.S. was supported by the European Research Council under the European Union's Horizon 2020 Research and Innovation Programme under grant agreement 682602 and Research Council of Norway project 326896. C.S.H., C.G.B. and O.B.T. were supported by the Open Philanthropy Project. J.J. was supported by the NASA GISS Climate Impacts Group and the Open Philanthropy Project. B.L.B. has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement number 776479 (COACCH) and 821010 (CASCADES). I.W. and B.L.B. have received funding from the German Federal Ministry of Education and Research (BMBF) in the context of the project 'FOCUS–Food security and sustained coastal livelihoods through linking land and ocean' (031B0787B). R.H. was supported by the European Research Council under the European Union's Horizon 2020 Research and Innovation Programme under grant agreement 682602. We thank I. Helfand for valuable suggestions on the work and D. Lombardozzi for supporting CLM5crop simulations.

Department of Environmental Sciences, Rutgers University, New Brunswick, NJ, USA: Lili Xia & Alan Robock
Institut de Ciència i Tecnologia Ambientals, Universitat Autònoma de Barcelona, Cerdanyola del Vallès, Spain: Kim Scherrer
Department of Biological Sciences, University of Bergen, Bergen, Norway
Department of Ocean and Coastal Science, Center for Computation and Technology, Louisiana State University, Baton Rouge, LA, USA: Cheryl S. Harrison
Potsdam Institute for Climate Impact Research, Potsdam, Germany: Benjamin Leon Bodirsky, Isabelle Weindl & Jonas Jägermeyr
World Vegetable Center, Tainan, Taiwan: Benjamin Leon Bodirsky
NASA Goddard Institute for Space Studies, New York, NY, USA: Jonas Jägermeyr
Center for Climate Systems Research, Columbia University, New York, NY, USA
National Center for Atmospheric Research, Boulder, CO, USA: Charles G. Bardeen
Laboratory for Atmospheric and Space Physics, Department of Atmospheric and Ocean Sciences, University of Colorado, Boulder, CO, USA: Owen B. Toon
School of Mathematical Sciences, Queensland University of Technology, Brisbane, Queensland, Australia: Ryan Heneghan

L.X., A.R., K.S. and C.S.H. designed the study. C.G.B. conducted the climate model simulations, L.X. conducted the crop simulations and K.S. and R.H. conducted the fishery simulations. J.J. provided GGCMI crop model simulations. B.L.B. and I.W. provided national livestock feed and national calorie intake data. L.X. analysed the data with contributions from all the authors. A.R. and L.X. wrote the first draft, and all authors contributed to editing and revising the manuscript.

Correspondence to Lili Xia. The authors declare no competing interests. Nature Food thanks Deepak Ray, Ertharin Cousin, Michal Smetana and the other, anonymous, reviewer(s) for their contribution to the peer review of this work. Supplementary Figs. 1–8 and Tables 1–9. Reporting Summary.

Xia, L., Robock, A., Scherrer, K. et al.
Global food insecurity and famine from reduced crop, marine fishery and livestock production due to climate disruption from nuclear war soot injection. Nat Food 3, 586–596 (2022). https://doi.org/10.1038/s43016-022-00573-0. Issue Date: August 2022.
CommonCrawl
Pair-dense relation algebras by Roger D. Maddux. Trans. Amer. Math. Soc. 328 (1991), 83-131.

The central result of this paper is that every pair-dense relation algebra is completely representable. A relation algebra is said to be pair-dense if every nonzero element below the identity contains a "pair". A pair is the relation algebraic analogue of a relation of the form $\{ \langle {a,a} \rangle ,\langle {b,b} \rangle \}$ (with $a = b$ allowed). In a simple pair-dense relation algebra, every pair is either a "point" (an algebraic analogue of $\{ \langle {a,a} \rangle \}$) or a "twin" (a pair which contains no point). In fact, every simple pair-dense relation algebra $\mathfrak {A}$ is completely representable over a set $U$ iff $|U| = \kappa + 2\lambda$, where $\kappa$ is the number of points of $\mathfrak {A}$ and $\lambda$ is the number of twins of $\mathfrak {A}$. A relation algebra is said to be point-dense if every nonzero element below the identity contains a point. In a point-dense relation algebra every pair is a point, so a simple point-dense relation algebra $\mathfrak {A}$ is completely representable over $U$ iff $|U| = \kappa$, where $\kappa$ is the number of points of $\mathfrak {A}$. This last result actually holds for semiassociative relation algebras, a class of algebras strictly containing the class of relation algebras. It follows that the relation algebra of all binary relations on a set $U$ may be characterized as a simple complete point-dense semiassociative relation algebra whose set of points has the same cardinality as $U$. Semiassociative relation algebras may not be associative, so the equation $(x;y);z = x;(y;z)$ may fail, but it does hold if any one of $x, y$, or $z$ is $1$. In fact, any rearrangement of parentheses is possible in a term of the form $x_0; \ldots ;x_{\alpha - 1}$, in case one of the $x_\kappa$'s is $1$. This result is proved in a general setting for a special class of groupoids.
Journal: Trans. Amer. Math. Soc. 328 (1991), 83-131. MSC: Primary 03G15; Secondary 08B99, 68Q99.
CommonCrawl
Replacing Jupiter with a brown dwarf?

This is a purely hypothetical question, but I can't find a satisfactory answer to it. Let's say somehow Jupiter collects enough mass to be considered a brown dwarf. Let's assume Jupiter reaches a maximum of 75 Jupiter masses, which would make it a large brown dwarf in the solar system. What would happen to Earth if this were to happen? I mean mostly in terms of Earth's orbit and the radiation output. Would life still exist on Earth? Further, what would the solar system even look like if Jupiter were to become a brown dwarf? Would the planets and Jupiter still orbit around the Sun? Or would the sheer mass of Jupiter catapult some planets out of the solar system altogether?

science-based planets hard-science astronomy orbital-mechanics

tempestwing0101

This question asks for hard science. All answers to this question should be backed up by equations, empirical evidence, scientific papers, other citations, etc. Answers that do not satisfy this requirement might be removed. See the tag description for more information.

If Jupiter gains 0.075 Solar Mass (from current 0.001), that would mean huge changes for all orbits inside the Solar system. Earth will shift to a different orbit, although the process will take millions of years to stabilize. Asteroid and even planetary crashes will be possible. – Alexander Feb 21 '18 at 18:22

Check this and this. Both are from members of World Building :) – Renan Feb 21 '18 at 18:42

You can load up this scenario in Universe Sandbox and find out! – Schwern Feb 21 '18 at 19:02

I am not sure if this is the right question. Can Jupiter BECOME 75 Jupiter Mass without taking the mass from the rest of the solar system? It would seem that this would have to occur over time, and it would be the time factor that is crucial. However, if the question were a generic 'What if Jupiter was formed with 75 Jupiter mass?', the question would be more appropriate. In other words, the process of Jupiter GAINING 75 Jupiter Mass would have a greater effect than Jupiter actually HAVING 75 Jupiter Mass. – Justin Thyme Feb 21 '18 at 19:36

The effects would be significant. At Jupiter's current mass, it's massive enough that it orbits a point just outside the sun, and the sun orbits the same point. Make it 75 times more massive and that point gets sucked noticeably closer toward Jupiter. That would have significant effects on the orbits of the other planets, especially over geologic timescales. – Jim MacKenzie Feb 21 '18 at 22:37

This is the formula for how much force gravity exerts between any two masses in space:

$$ F = G \frac{m_{1} m_{2}}{r^2} $$

where:

G is a constant: $$ G = 6.674×10^{-11}\ N\,\left(\frac{m}{kg}\right)^2 $$
m1 and m2 are the masses involved
r is the distance between the masses

Let's calculate how strongly Jupiter attracts the Earth. We need some background first:

Distance on closest approach ≈ 5.88 × 10^11 meters
Earth's mass ≈ 6 × 10^24 kg
Jupiter's current mass ≈ 1.9 × 10^27 kg

At their closest approach, for all practical purposes, Jupiter attracts the Earth with a force of...

$$ 6.674×10^{-11}\,N\left(\frac{m}{kg}\right)^2×\frac{(6×10^{24}\,kg)×(1.9×10^{27}\,kg)}{(5.88×10^{11}\,m)^2} \approx 2.2×10^{18}\,N $$

2.2 × 10^18 newtons may seem like a heck of a force, but it is only enough to accelerate the Earth towards Jupiter at a rate of about 3.67 × 10^-7 meters per second squared. That is roughly a third of a millionth of a meter per second, gained each second.
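As a quick check (an editorial sketch, not part of the original answer), the same arithmetic fits in a few lines of Python, and it scales directly to the 75-Jupiter-mass and Moon comparisons worked through below:

```python
# Newtonian pull on Earth from Jupiter, using the rounded figures above.
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
m_earth = 6e24         # kg
m_jupiter = 1.9e27     # kg
r_jupiter = 5.88e11    # m, closest approach

def pull(m1, m2, r):
    """Gravitational force (N) and the acceleration (m/s^2) it gives mass m1."""
    force = G * m1 * m2 / r**2
    return force, force / m1

for label, mass in [("1 Jupiter mass", m_jupiter), ("75 Jupiter masses", 75 * m_jupiter)]:
    f, a = pull(m_earth, mass, r_jupiter)
    print(f"{label}: F = {f:.2e} N, a = {a:.2e} m/s^2")
```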
By the time any significant pull has accumulated, the Earth will have moved further away from Jupiter, lessening the pull. Now let's run the same calculation with 75 Jupiter masses:

$$ 6.674\times10^{-11}\,\text{N}\left(\frac{\text{m}}{\text{kg}}\right)^2\times\frac{(6\times10^{24}\,\text{kg})\times(1.425\times10^{29}\,\text{kg})}{(5.88\times10^{11}\,\text{m})^2} \approx 1.65\times10^{20}\,\text{N} $$

That is enough to accelerate the Earth towards Jupiter at about $2.75\times10^{-5}$ meters per second squared. It is almost the same pull that the Moon has on Earth. Running the same equation for the pull between the Earth and the Moon (mass ≈ $7.34\times10^{22}$ kg, distance ≈ 384,400 km):

$$ 6.674\times10^{-11}\,\text{N}\left(\frac{\text{m}}{\text{kg}}\right)^2\times\frac{(6\times10^{24}\,\text{kg})\times(7.34\times10^{22}\,\text{kg})}{(3.844\times10^{8}\,\text{m})^2} \approx 1.989\times10^{20}\,\text{N} $$

Which is comparable to the previous calculation. However, since Jupiter is much farther away, the difference in the forces it would exert on the near and far sides of Earth would be very small: varying the distance by six thousand kilometers either way in the formula above changes the force by only about two parts in a hundred thousand. Not enough to cause tides (contrary to what I said in a previous version of this post).

Saturn's closest approach distance to Jupiter is very close to Earth's closest approach distance. Saturn's mass is close to a hundred Earth masses, so the pull between Saturn and brown dwarf Jupiter would be around 100x the pull between brown dwarf Jupiter and Earth. Not enough to fling Saturn out of its orbit... Maybe some rings would be rearranged.

Other bodies in the solar system would be similarly affected. Perturbations in the asteroid belt could fling some towards the sun over millennia, which could put us at risk, but we shouldn't have much cause for immediate worry.

Renan

Re tides: it's not the gravitational force that matters, it's the gradient (i.e. how different is that pull on far and near sides of earth). – frodoskywalker Apr 2 '18 at 18:55

@frodoskywalker you're right. I am going to run some numbers. – Renan Apr 2 '18 at 18:58

@frodoskywalker I have corrected the post, many thanks :) – Renan Apr 2 '18 at 19:07

First, I am going to assume that the question is, "What would have happened to Earth if, during the formation of the Solar System, Jupiter had grown to be a large brown dwarf rather than a moderate-sized gas giant." This is a difficult question for two reasons. First, while we've made excellent progress in understanding the formation of planets, we still have a lot to learn. Secondly, the process is usually chaotic in the technical sense, where small changes at early times can produce arbitrarily large changes in the final results. So one point right off the bat: Possible outcomes include the Earth and other small planets being ejected from the solar system or impacting the Sun or UberJup. Based on what I recall from reports on detailed modelling, I'm pretty sure that most of the asteroid belt would be ejected and possibly Mars as well. There is a paper at http://iopscience.iop.org/article/10.1086/300695/fulltext/ which presents simulations of planets in various binary systems to determine orbital stability. (Note that it doesn't actually look at the brown dwarf case, but the lowest mass case it does consider gives us hints. Also, note that it only looks at 10,000 "years", and that we know from other work that instabilities can happen much later.) However, given those caveats it looks like stable orbits can exist inside about half UberJup's distance from the Sun.
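To back up the tide remark quantitatively, here is a small Python check (my own addition, not part of the original answer) comparing the differential, tide-raising acceleration across the Earth from a 75-Jupiter-mass companion at Jupiter's distance with that from the Moon, using the standard approximation 2*G*M*R/d^3:

G = 6.674e-11       # N m^2 / kg^2
R_EARTH = 6.371e6   # m, Earth's radius

def tidal_accel(mass, distance):
    """Approximate near-side/far-side difference in pull across the Earth, m/s^2."""
    return 2 * G * mass * R_EARTH / distance**3

moon = tidal_accel(7.34e22, 3.844e8)          # the Moon
uberjup = tidal_accel(75 * 1.9e27, 5.88e11)   # 75 Jupiter masses at closest approach

print(f"Moon    : {moon:.2e} m/s^2")     # about 1.1e-6
print(f"UberJup : {uberjup:.2e} m/s^2")  # about 6e-10
print(f"ratio   : {moon / uberjup:.0f}") # the Moon's tidal pull is ~1800 times stronger

Even at 75 Jupiter masses, the tidal stretch across the Earth stays roughly three orders of magnitude below the Moon's, consistent with the corrected statement above.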
Since UberJup is at about 5 AU, we can reasonably expect all of the inner planets (including Mars at about 1.5 AU) to have stable orbits. This is for the case of a low-eccentricity orbit for UberJup, since a higher eccentricity makes the inner planets less stable. Bottom line so far: The inner planets could survive in stable orbits, but there is a non-trivial chance they wouldn't.

There's a lot of cutting-edge work on planetary migration, which I've not considered and which probably decreases Earth's survival chances. Basically, once you're not looking at ultra-close approaches -- cosmic billiards -- resonance effects become important. Resonance effects don't even require huge masses. Basically, if, say, Earth and Venus had orbits whose periods were in a small integer ratio: 2:3, 1:2, 3:4, etc., they occupy the same relative positions again and again and again and even very small gravitational effects can build up and, slowly over time, planets can exchange surprisingly large amounts of energy and momentum. It appears that in the actual history of the Solar System, Jupiter and Saturn did just that and moved first in to perhaps half their current orbital distances and then out before settling down where they are today. I have no idea how replacing Jupiter with UberJup would affect this. It could be simulated, but it's beyond my ability and I know of no one who has considered this problem. (Which is not to say no one has -- the literature is very large.)

So let's forget all that and look at the minimum changes case: UberJup sits where Jupiter sits. The inner planets are basically unaffected in their orbits. The asteroid belt is probably nearly empty. Outside UberJup's orbit there are probably some gas giants and neptunes, but their arrangement and number is probably different than the Solar System's. The arrangement has a reasonable chance of being stable.

One potentially big change is that the clearing of the asteroid belt would probably have resulted in an increased very early bombardment of Earth, so the Earth might be a few percent more massive than it is today. It might also have a second natural satellite, though much smaller than the Moon.

Finally -- here's where chaos comes in -- there's a good chance the Moon would not exist at all. It appears that the formation of the Moon happened due to a glancing strike of one of the last planetary embryos on a nearly complete Earth. This splashed a lot of matter into orbit, some of which coalesced into our massive Moon. It appears this is a fairly low probability event, so the presence of UberJup might well have erased our giant Moon.

Mark Olson

(Not a scientist, just using logical thinking, and I apologize for my grammar mistakes.) I think, if Jupiter gathers enough mass, most of the closer objects (the asteroid belt between Jupiter and Mars) might be pulled towards the planet, now a brown dwarf, and start to orbit it. Planets close enough, like Saturn and Mars, might see changes in their orbit patterns. Mars might even become a "moon" of Jupiter. Jupiter will probably have a stronger and more extreme effect on weather conditions on Earth, like a bigger version of the moon. The ecosystem of Earth will probably be highly affected. Maybe in time the sun and Jupiter enter a binary star relationship, as the sun and Jupiter might be pulled together because of their gravitational pull. And some of the smaller planets might be swung out of the solar system because of changing gravitational pulls.
Noodle

Hi, Noodle, welcome to the Worldbuilding SE. Because this question has the hard-science tag, answers must be rigorous, well-cited, and consistent with the best scientific theories available. You could improve your answer by adding some mathematics to back up your speculations, and providing your reasons for thinking the ecosystem of the earth would be affected. Also, the parenthetical tip at the end is inappropriate for this question type, because the asker has already specified that they want to respect the rules (natural laws) of the actual world. – SudoSedWinifred Apr 5 '18 at 9:58
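As a rough complement to the orbital-stability discussion in the answers above (this sketch is my own addition, not part of any answer), the Hill-sphere radius $r_H \approx a\,(m/3M_\odot)^{1/3}$ gives a feel for how far the gravitational dominance of a 75-Jupiter-mass "UberJup" at Jupiter's distance would reach:

M_SUN = 1.989e30   # kg
M_JUP = 1.898e27   # kg
A_JUP = 5.2        # semi-major axis, AU

def hill_radius_au(mass_kg, a_au):
    """Approximate Hill-sphere radius (in AU) for a body orbiting the Sun at a_au."""
    return a_au * (mass_kg / (3.0 * M_SUN)) ** (1.0 / 3.0)

print(f"Jupiter today    : {hill_radius_au(M_JUP, A_JUP):.2f} AU")       # ~0.35 AU
print(f"75 Jupiter masses: {hill_radius_au(75 * M_JUP, A_JUP):.2f} AU")  # ~1.5 AU

A Hill radius of roughly 1.5 AU instead of about 0.35 AU illustrates how much farther UberJup's gravitational influence would extend, in line with the expectation above that the asteroid belt would be heavily disturbed while orbits well inside half of UberJup's distance from the Sun could remain stable.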
Components in an expression · Expand binomial expressions · Expand further binomial expressions · Expand perfect squares · Expand difference of two squares · Identify greatest common algebraic factor · Factor algebraic terms · Factor algebraic expressions

CanadaON

Further binomials

When we previously came across binomial products, we learnt that we can use the distributive law twice in order to expand the product into four terms.

$\left(ax+b\right)\left(cx+d\right)$
$= ax\left(cx+d\right)+b\left(cx+d\right)$
$= acx^2+adx+bcx+bd$
$= acx^2+\left(ad+bc\right)x+bd$

In this case, the variable $x$ appears in both binomial expressions, so we may simplify the expansion further by combining $adx+bcx$ into $\left(ad+bc\right)x$. We could instead, however, have the following situation.

$\left(ax+b\right)\left(cy+d\right)$
$= ax\left(cy+d\right)+b\left(cy+d\right)$
$= acxy+adx+bcy+bd$

Here you'll notice that we can't simplify the expression any further, since there are no like terms.

An Alternate Approach

We can also expand a binomial product, still using the distributive law, by multiplying both terms in the first set of brackets by both terms in the second set of brackets, as shown in the picture below for the product $\left(x+5\right)\left(x+2\right)$. By expanding in this way we will get the result $x^2+2x+5x+10$, the same result we would have obtained using the previous method. You may prefer to use this alternate method since it combines two iterations of the distributive law into one line of working.

Now let's have a look at some worked examples where there are different variables in each binomial, or where further simplification is required after expanding.

Expand and simplify the following:

$\left(4r+7\right)\left(7s+2\right)$

$\left(2n+5\right)\left(5n+2\right)-4$

Expand and simplify:

$6\left(\frac{x}{2}-2\right)\left(x-2\right)+2x$

10P.QR1.01 Expand and simplify second-degree polynomial expressions involving one variable that consist of the product of two binomials [e.g., (2x + 3)(x + 4)] or the square of a binomial [e.g., (x + 3)^2], using a variety of tools and strategies
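For reference, here is one possible worked expansion of each of the three prompts above (my own working, in the same step-by-step style as the earlier derivations):

$\left(4r+7\right)\left(7s+2\right) = 28rs+8r+49s+14$

$\left(2n+5\right)\left(5n+2\right)-4 = 10n^2+4n+25n+10-4 = 10n^2+29n+6$

$6\left(\frac{x}{2}-2\right)\left(x-2\right)+2x = 6\left(\frac{x^2}{2}-3x+4\right)+2x = 3x^2-18x+24+2x = 3x^2-16x+24$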
3D finite element coupled analysis model for geotechnical and complex structural problems of historic masonry structures: conservation of Abu Serga church, Cairo, Egypt Sayed Hemeda ORCID: orcid.org/0000-0003-0308-92851 This research presents the damage mechanism of a historical masonry architecture induced by differential settlement based on 3D FE analysis. The purpose of the study was to investigate the behavior fully-saturated soft clays subjected to self-weight loading from an old masonry structure of Abu Serga church which is the oldest church in Egypt dating back to the fifth century A.D and located in old Cairo area in Cairo city. The church gains its high prestige to having been constructed upon the Holy crypt of the Holy Family where they stayed during their sojourn in Egypt. The main objective of the present study is too accurately record and analysis the geotechnical problems and induced structural failure mechanisms observed and calculated in the field, experimental and numerical studies. The land area is also susceptible to floods. Numerical analysis for such geotechnical problems is largely expected to contribute to the conservation of cultural heritages. The present research presents an attempt and pilot study to design the PLAXIS 3D FE model to simulate ground problems, and to distort and analyze the stress of the complex structure of the Abu Serga church, which is loaded on plane level. Plastic modeling or Mohr–Coulomb model in advanced soil was used during the various stages of numerical analysis. Results are recorded and discussed with respect to stress and volumetric behavior of soil. Finally, the study represents the design studies and implementation of the inter-organizational retrofitting intervention and strengthening project for the oldest Coptic church in Egypt. Historical monuments are invariably exposed to the influence of the geological environment. Given the lifespan of such structures, several dynamic geological processes (weathering/erosion, surface movements and earthquakes) usually have a dramatic impact on the integrity of the monuments. The protection of monuments requires special approaches in terms of adaptation of the engineering interventions to the historical environment and the lifetime of such interventions. The significant cost and implicit long-term effectiveness of engineering schemes for the protection of historical monuments necessitates integrated approaches requiring on-going validation of the design. The co-operation between the designer and the contractor during construction and long-term performance monitoring are key components for the success of such undertakings. Structural damage to the architectural heritage is often caused by the displacement of the earth's soil, its differential settlement, its rotation, or any other effect of the interaction between the structure and the soil. Although it is necessary to examine both the shear resistance and the underlying settlements of any structure, the research is very limited and focuses on mechanisms of failure of superstructures only [1,2,3,4,5,6,7]. To determine the magnitude of stresses, analyze the deformation and settlement of the soft silty clay soil and the superstructure response, an analytical coupled model of geotechnical and structural engineering is presented in detail. Geotechnical numerical modeling of complex soil structure problems requires advanced three-dimensional advanced soil models. 
PLAXIS 3D (PLAXIS v.b 2018) was used to calculate the soil settlement due to consolidation and the impact of its accompanied pressures and stresses on the superstructure. It is a program produced for the geotechnical construction plan and inquired about it and was used late as part of the structural and geotechnical survey. The Mohr–Coulomb model is used for both static assembly and rigidity inspection. The code contains a useful methodology for the programmed batching drive, called Load Advancement, which we used here [8,9,10]. Constitutive models are the key-stone not only for understanding the mechanical behavior of soils but also for carrying out numerical predictions by means of the FE method [11, 12]. Since 1970s, there are extensive studies on elastic–plastic model about saturation soil under dynamic loading. Building model under monotonic loading and using relatively complicated hardening law, such as the model based on modifier Cambridge model by Carter [13], the Desai model with single yield surface built in 1984 [14]. The dynamic model based on other types of plasticity theory, such as multi-surface model built by Mroz et al. and Provest [15, 16], secondary loading surface model built by Hashiguchi in 1993, the plasticity model of sand based on multi-mechanism conception under cyclic loading by Kabilamany, Pastor et al. [17, 18]. The evaluation methodology which has been followed for the structural rehabilitation of Abu Serga church (which is located in old Cairo area in Cairo) comprised the following phases/actions, as summarized in Table 1. Table 1 The evaluation methodology which has been followed for the structural rehabilitation of Abu Serga church Methodology established for the study of historic masonry structures were assessed in terms of their reliability, accuracy and effectiveness by comparing analytical results with experimental and empirical data [19, 20]. Geotechnical conditions and monitoring The subsurface conditions of the Abu Serga church consists of multi layers of thick soft clay mixed with variable sand. The fine sandy layers are shown to the middle at a depth of 6 m below the floor level of the chapel. The subsurface water appears at a depth of 1.8 m [21]. Figure 1 shows the ERT-3 reflection model on the main street of Mar Girgis, at a height of 5 m above the level of the ground floor of the church. ERT 2D model for profile ERT-3 at Abu serga church The geotechnical investigations carried out on the extracted soil samples. Data was collected from five (5) (boreholes), four (4) (PCPT/CPT), and eighteen (18) Undisturbed Samples (US). The results of laboratory tests were selected over eleven (11) grain size distributions, five (5) oedometer tests, three (3) direct shear tests, six (6) triaxial CU+ u tests and five (5) triaxial UU tests. The geotechnical testing has been carried out in the Soil Mechanics Laboratory of Faculty of Engineering, Cairo University. The physical and geomechanical characteristics of bearing soil layers under church are presented and summarized in details in Table 2. Table 2 Geotechnical characterization of the soil layers underneath the (St. Sergius) Abu Serga church in old Cairo area CPT and CPTu data reveal that the soft clay soil layer is characterized by a tip resistance (qc) lower than 1 MPa and the friction ratio is between 1 and 3. For Sand Layers: Atterberg limits, liquid limit (WL) = 20, plastic limit (Wp) = 0, dry unit weight γdry = 18 kN/m3, saturated unit weight γsat = 21 kN/m3. 
For the elastic parameters, Modulus of Elasticity E = 6000 KN/m2 and Poisson's ratio υ = 0.35. For the shear strength parameters of this sand, the cohesion of grain particles c = 0 kN/m2 and internal friction angle ϕ = 34. By the dewatering project in 2000, differential soil consolidation settlements were recorded between different parts of the subsoil and structures of Abu Serga church and other surrounding churches and chapels in the area. Settlements have been calculated up to 0.9 cm at the areas of filter wall in the old Cairo archaeological area. Engineering properties of building materials Another important parameter, necessary for the complete documentation of the structure and for understanding its behavior and response due to the subsoil settlement, is the identification of construction and building material engineering properties, which may have different characteristics depending on construction phases of the historic masonry structure. Table 3 summarized the engineering properties of the different construction and building materials Table 3 Engineering properties of the different construction and building materials of Abu Serga church Thirty brick samples and specimens have been collected from different locations in the structure of Abu Serga church, and the physical and mechanical testing have been achieved in the Laboratory of Building Materials in Faculty of Engineering in Cairo University by the author. The averaged results indicated that; (1) Physical properties, for specific gravity (Gs) it is ranged between 1.8 and 2.0 g/cm3, water absorption (Wa) is 20.1%, Porosity (n) is 27%. (2) Mechanical properties, uniaxial compressive strength (σc) is 1.6–4.7 MPa, Brazilian splitting tensile strength (σt) is 1.8 MPa, primary wave velocity (Vp) is 1.71 km/s, static Young's modulus (E) is 8.4 GPa, dynamic Young's modulus (Edy) is 2.4 GPa, and Shear modulus (G) is 917 MPa. The results from the physical and mechanical testing referred that the main construction materials of the church which is the bricks are in advanced state of deterioration and demand a necessary strengthening and retrofitting interventions. Red bricks and the hydraulic mortars were of the most important construction materials used in the Coptic churches including the subject of the present study "Abu Serga church" in the old Cairo. Generally the studied fired brick is formed mainly of quartz and feldspar grains. These grains are embedded in a ferruginous dark brownish groundmass formed mainly of iron oxides (hematite) and burnt clays. The bricks have various colors and dimensions and formed mainly from local raw material (Nile sediments) together with some additives (rice hush and/or plant ash to improve their properties. They are of medium density, high porosity due to the weathering activities (subsurface water and salt weathering. They have wide range values of their physical and engineering characteristics (e.g. specific density, water absorption, porosity, ultrasonic velocity uniaxial compressive strength σc, static modulus of elasticity, dynamic modulus of elasticity, and shear stress). Ten marble samples and specimens have been collected from the fallen fragments and from different deteriorated locations in the marble columns inside Abu Serga church. The averaged results indicated that; (1) Physical properties, for specific gravity (Gs) it is ranged between 2.6 to 2.8 g/cm3, Water absorption (Wa) is 12%, Porosity (n) is 32%. 
(2) Mechanical properties, uniaxial compressive strength (σc) is 16 MPa, Brazilian splitting tensile strength (σt) is 6 MPa, primary wave velocity (Vp) is 2.87 km/s, static Young's modulus (E) is 30 GPa, dynamic Young's modulus (Edy) is 11 GPa, and Shear modulus (G) is 1195 MPa. The results indicated a very poor mechanical characterization of these marble columns which could affect the stability of these structural columns and induced the deformation patterns which is obvious. Four wooden samples and specimens have been collected from different locations in the roof of Abu Serga church. The field observation and the averaged results of the mechanical testing indicated that the structural wooden beams which support the roof are deflected in high value due to the overloading and the material decay and degradation; (1) physical properties, for specific gravity (Gs) it is ranged between 0.64 to 0.69 g/cm3, Water absorption (Wa) is 30%. (2) Mechanical properties, uniaxial compressive strength (σc) is 8 MPa, Brazilian splitting tensile strength (σt) is 3 MPa, Static Young's modulus (E) is 7 GPa. The architectural design of Abu Serga church Abu Serga church is a small chapel with a length of 29.4 m and a width of 17 m and a height of 15 m. The ground floor is about 1.5 m under the surrounding alleys and 4.5 m down St. George's Main Street. It features a typical basilica design with a gallery leading to a nave with two side aisles, and these passages are separated by twelve equal marble columns of row. Three sanctuaries occupying the eastern side of the church. In the northern and southern ones, there are internal underground stairs leading to the Holly crypt. The height of the Holly crypt is about 6.5 × 5 × 2.5 m with two columns rows containing a longitudinal series of arches on each row. These two rows of columns are divided into three long shallow domes, which can be considered a plateau with northern and southern passages. [22] Modern architectural surveys and studies presented in this study indicated that the length of the church is 29.40 m and its total width, including the external open court accompanying 24.50 m, as shown in Figs. 2, 3, 4, and 5. a Plans or base map of the Ground floor and b first floor a Plans of the Holly crypt, and b the roof of the church a Cross section 1–1. And b CS 2–2. Look Fig. 3 a Representative 3Dimension model of the church display its two main entrances. b 3Dimension representative model of the church, display the main northern entrance State of Abu Serga church preservation Numerous local cracks and deformation patterns were observed and recorded mainly during the old fluctuations of the Nile before to the construction of the Aswan High Dam in 1968 (ancient dams that caused the loading and unloading of the underground layers under the historic building structures) and during the water removal project (Contract 102 in 2000) to reduce groundwater in the Old Cairo area. The main problems of the structural elements and material decay of the structural component of the historic masonry structure of Abu Serga church can be summarized as follow: Abu Serga church suffered great deterioration due to the extensive cracking due to the settlement of the subsoil and the surface movements. 
Almost causes of structural deficiency and damage seem to be four of the mechanical static and dynamic actions, affecting the superstructure of the church: Differential consolidation settlement due to the plane loading of the superstructure on the full saturated clay soil and expulsion of the pore water, also the differential settlement due the shear failure of the soil layer under the heavy loading and the poor geotechnical characteristics of the soft bearing clay layer, also the fluctuations of the subsurface water can reduce the bearing capacity of the bearing soil to 50%. The dewatering project in the old Cairo area in 2000 was one of the causes of soil settlement due to consolidation of the thick fully saturated clay layer. The internal deformities of the two facades of the church are clear, as shown in Fig. 6. Settlement and the rotation and inclination of the structural marble columns and the different cracks in the arches inside the church are well observed; also the vertical cracks in the lintels inside church are shown in Fig. 7. Deformations and crack patterns of the bearing brick walls of Abu Serga church Cracks patterns and deformation in the different structure and architectural elements inside the church Seismic loading, according to historical facts the powerful earthquakes and the recent earthquakes in particularly the Dahshur earthquake 1992 and Aqaba earthquake 1995, that have stuck old Cairo area, caused small or medium damages to Abu Serga church. Degradation of construction and building materials, moisture often plays the important and main role of the degradation of the building materials. The main source of the humidity is the subsurface water and high groundwater level for a long period of time. The technical evaluation of the brick walls provided a general case of in and out of plane deformations. The excavations of the underground metro; many damage and deformation patterns well observed through the structure of the church due to excavation induced subsidence. Constitutive modeling (numerical analysis) In this study; PLAXIS 3D performed with a plastic material model to determine the behavior and nonlinear response of the saturated soft silty clay soil and masonry structure of the church. Plaxis is a commercially available program which is using finite element method FEM. Plaxis is using different soil models to define soil behavior such as Mohr–Coulomb Model, Hardening Soil Model, Soft Soil Model, Soft Soil Creep Model, Jointed Rock Model and Modified Cam-Clay Model. Mohr–Coulomb Model is chosen for this study because it is commonly used and not required extra soil parameters. The Linear-Elastic Perfectly-Plastic Mohr–Coulomb demonstrate includes five information parameters, i.e. Young's modulus E and Poisson's ratio nu (ν) for soil flexibility; Cohesion c, friction angle phi (φ) and dilatancy psi (ψ) have to do with soil shear behaviour. The Mohr–Coulomb model represents a 'first-order' approximation of soil or rock behaviour. Mohr–Coulomb model is a straightforward and pertinent to three-dimensional stress space model to depict the plastic conduct of earth soil and its immersed conduct and related stream. As to quality conduct, this model performs better. This model is applicable to analyses the stability of shallow foundations and the soil problems. For Mohr–Coulomb flow rule is defined through the dilatancy angle of the soil. 
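As a small illustration of how the five Mohr–Coulomb input parameters above enter the strength side of the model, the sketch below (my own addition, not part of the original study) evaluates the failure envelope τ_f = c + σ'·tan(φ) using the sand-layer values quoted earlier in this paper (c = 0 kN/m2, φ = 34°); the clay parameters would be taken from Table 2 in the same way.

import math

def mc_shear_strength(sigma_n_kpa, c_kpa, phi_deg):
    """Mohr-Coulomb failure envelope: tau_f = c + sigma_n' * tan(phi)."""
    return c_kpa + sigma_n_kpa * math.tan(math.radians(phi_deg))

# Sand-layer strength parameters quoted in the text: c = 0 kN/m2, phi = 34 degrees
for sigma_n in (25.0, 50.0, 100.0, 200.0):   # effective normal stress, kN/m2
    tau_f = mc_shear_strength(sigma_n, c_kpa=0.0, phi_deg=34.0)
    print(f"sigma'_n = {sigma_n:6.1f} kN/m2  ->  tau_f = {tau_f:6.1f} kN/m2")

The elastic pair E and ν governs the response before failure is reached, while the dilatancy angle ψ enters only through the flow rule discussed next.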
In soft soils volumetric plastic strains on shearing are compressive (negative dilation) whilst Mohr–Coulomb model will predict continuous dilation. Numerical modeling of the plane strain using the PLAXIS 3D (version 2018) was adopted. Consists of 15 nodes of finite trigonometric elements with a medium precision network to reduce the calculation time. All the geotechnical characteristics of the soil layers and engineering characterization of the building materials of Abu Serga church are listed in Tables 2 and 3. The Mohr–Coulomb constitutive law was chosen to describe the behavior of saturated soft clay behavior. The water is located at 1.8 m deep. The settlement was monitored according to the time given for the actual settlement [3]. Subsoil behavior is studied in details. The masonry structure of the church was modeled using solid element module, the solid element properties taken as defined above in design criteria [23, 24]. The church was modeled using frame elements for marble columns and shell elements for the ceilings with same properties in the mentioned reports above. Rigid links are used to connect all brick walls together to act as one unit, rigid links defined with very high moment of inertial and weightless. Elements cross sections solid elements thickness varies from 160 to 70 cm along the height. Beams cross sections 25 cm * 80 cm. Slab thickness is 16 cm. Results of numerical analysis Successful use of a mathematical analytical model can provide information on the type, extent and location of damage and unsafe zones and safety levels. The results of the numerical analysis of church are shown as originally designed that some of the surface subsidence occurred during its construction on thick layers of soft clay (6 m) and because of the drainage or dewatering project in 2000. Displacement developed on the surface, above the maximum value of about 122.85 mm, is the total consolidation settlement and failure of local shearing of the soil due to the in-plane loading. Figure 8 shows the geometry and FE discretization of the 3D model and the deformed mesh and the calculated vertical displacement Uy of the saturated silty clay soil with the structure loading and appears distributed under the superstructure of the church in Figs. 9 and 10, the extreme value Uy is 122.85 mm. The volumetric strain εv distribution of the subsoil is shown in, with a maximum value of 4.50%. The initial effective compressive stresses was determined at the middle of the clay layer depth, where σv = 60 kN/m2. The effective mean stresses P on the subsoil is shown below the structure with a maximum value of 192.85 kN/m2, as shown in Fig. 11. The average effective stress is also calculated and the maximum value is 192.85 kN/m2. 3 finite elements discretization of the PLAXIS model and deformed generated mesh The maximum vertical displacement Uy is 122.85 mm Differential vertical displacement patterns in the bearing silty clay soil The maximum effective mean stress is 192.85 kN/m2 The active pore pressure Pactive distribution in the ground, where the maximum value of Pactive is 240 kN/m2 as shown in Fig. 12. The extreme active pore pressure Pactive value is 240 kN/m2 Settlement calculations determined from the empirical study give the same settlement value, Eq. 
1 [25]:

$$\Delta H = \frac{C_c H}{1 + e_0}\log\left(\frac{P_0 + \Delta P}{P_0}\right)$$

ΔH is the consolidation settlement, Cc is the compression index, H is the thickness of the clay soil layer, e0 is the initial void ratio, P0 is the initial effective vertical stress at the middle of the clay layer depth, and ΔP is the change in the vertical effective stress [26]. The consolidation settlement of the soft clay layer is therefore

$$\Delta H = \frac{5.5 \times 0.3}{1 + 1.84} \times \log_{10}\left(\frac{60 + 60}{60}\right) = 165\ \text{mm},$$

where the thickness of the layer H is 5.5 m, the compression index Cc = 0.3, the initial stress at the depth of the clay layer is P0 = 60 kN/m2, and the initial void ratio is 1.84.

The subsidence and settlement of the subsoil may seriously affect the superstructure of the church. The main reason is the consolidation of the subsoil clay layers; the shear failure characteristics of the subsoil may also lead to ground subsidence.

The results of the numerical analysis show the distribution of the vertical displacement Uy of the church superstructure, with a maximum value of 67.62 mm, as shown in Fig. 13. The distribution of the horizontal displacement Ux of the structure of the Abu Serga church shows a maximum value of 8.91 mm and a minimum value of 2.83 mm.

Fig. 13 The maximum vertical displacements of the superstructure are 67.62 mm

Figure 14 illustrates the normal axial force N1 on the superstructure of the church (maximum/minimum principal stresses); the maximum value is 784.85 kN/m and the minimum value is 380.88 kN/m.

Fig. 14 The extreme value of the axial force N1 on the superstructure of the church is 784.85 kN/m

The shear force Q1 on the superstructure of the church has a maximum value of 305.38 kN/m and a minimum value of 208.96 kN/m, as shown in Fig. 15, which represents the maximum/minimum principal stresses. Figure 16 shows the maximum/minimum principal stresses and the bending moments M1 on the superstructure of the church, with a maximum value of 104.88 kN/m and a minimum value of 80.32 kN/m. A bending moment of about 105 kN per unit length is a relatively high value; it may be the main cause of the out-of-plane deformation and the crack patterns on the main facades of Abu Serga church that were observed and recorded in the field and experimental studies.

Fig. 15 The extreme value of the shear forces Q on the superstructure of the church is 305.38 kN/m

Fig. 16 The extreme value of the bending moment on the superstructure of the church is 104.88 kN/m

The computed static surface ground displacements under Abu Serga church are high: the maximum total vertical displacement is 122.85 mm, which is not acceptable or permissible. Many studies [27,28,29,30,31,32] have discussed the permissible maximum settlement for shallow foundations in clay soils and indicated that, for load-bearing walls, the permissible maximum settlement is 60 mm in the case of isolated footings and 125 mm in the case of raft foundations. The maximum normal axial force N1 on the structure, 784.85 kN/m, is very close to the uniaxial compressive strength of the original brick (1400 kN/m2).
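As a quick cross-check of the hand calculations in this section, the short script below (my own addition) re-evaluates the one-dimensional consolidation estimate of Eq. 1 and the strength-over-load ratio implied by the last comparison. It simply reuses the parameter values stated in the text, and, like the text, compares the wall force in kN/m directly with the brick strength in kN/m2, which implicitly assumes a unit wall thickness.

import math

def consolidation_settlement(Cc, H, e0, P0, dP):
    """One-dimensional consolidation settlement (Eq. 1); result is in the same unit as H."""
    return (Cc * H) / (1.0 + e0) * math.log10((P0 + dP) / P0)

# Soft clay parameters quoted in the text
dH = consolidation_settlement(Cc=0.3, H=5.5, e0=1.84, P0=60.0, dP=60.0)
print(f"Consolidation settlement ~ {dH * 1000:.0f} mm")  # same order as the 165 mm reported

# Global factor of safety as defined later in the text: strength of component / load on component
fos = 1400.0 / 784.85
print(f"Factor of safety ~ {fos:.2f}")                   # about 1.78, below the threshold of 2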
Moreover shear force Q1 is on the superstructure of the church, with a maximum value of 305.38 kN/m, which is also close to the measured shear strength of the rock material (600 kN/m2). The results also indicated that the overstress state is beyond the elastic regime. With a global factor of safety (FoS = strength of component/load on component) equal to 1.78 (< 2) the Abu Serga church should not be considered as safe under static conditions, an FoS of 2 means that a component will fail at twice the design load. In conclusion the detailed analysis of the Abu Serga church proved that these important monuments present low safety factors for both static loading and soil consolidation settlement. Consequently a well-focused strengthening and retrofitting program is deemed necessary. It is deemed necessary to upgrade the safety reserves due to the special nature of the structure. From the results of the numerical modeling, indicated that the structural deficiencies in the superstructure of the oldest Cairo church, mainly the diagonal, shear and vertical cracks and other distortions within the plane, are mainly induced by the differential settlement of the full saturated clay subsoil consolidation. The technical assessment revealed that almost all level masonry structural walls presented a brittle mode of failure and more than that, from the first level were of "weak and soft stories" type. Strengthening (restoring) the procedures of the Abu Serga church The Intervention and strengthening measures and work for the church included the improvement of the subsoil layers, reinforcement of the shallow foundations and the strengthening of the superstructure the church. Table 4 summarized the main aspects of structural strengthening of architectural heritage. Table 4 Aspects of structural strengthening of architectural heritage Some details of the strengthening and intervention retrofitting project of Abu Serga church could be given as follow: Improvement of the of soft clay soil with jet injection techniques and liquid normal Portland cement. When the structural elements are capable of taking care of the total weight of the historical construction structure. Improvement of shallow foundations found by low pressure injection of hydraulic lime mortar was necessary. It was important to design the stitching system to connect the brick walls to the superstructure elements together. Extensive grouting was undertaken for filling of the wide cracks across the highly disturbed walls zones penetrating 200–300 mm into the walls. Grouting was carried out prior to the prestressing of the anchors to work load levels. Our previous experimental study referred that the mix of hydraulic lime + sand + brick dust + small proportion of white cement 3:1:1:0.2 respectively gave the best results under the mechanical testing. The hydraulic lime based grouts (due to their improved bond properties with the in situ materials) become more important due to the durability ensured by the use of materials that are compatible with the existing ones from the physical–chemical point of view [33]. Restoration and improvement the connections between brick walls. To prevent wall cracks in the longitudinal direction, reinforce the walls with high-strength strips and insert at least 50 mm in the wall joints in some layers. The number of common layers entered in the ranges was calculated according to the additional tension obtained through the analysis, as shown in Fig. 17. 
It recommended the inclusion of lead strips with a thickness of about a few millimeters in joints per 1.5–1.5 m above ground to ceiling in double walls. For the stone sections, it should not be isolated. This causes fins to be caused by small, low-magnitude earthquakes, but they are stopped for centuries and prevented from ascending and damaging the structure. After the structural reconstruction of the brick walls, it was necessary to reset all surfaces using lime mortar and remove previous interventions using cement paste. The design of the walls stitching and strengthening with different techniques and materials Straightening up and reinforcing marble and granite columns with steel ring beams. Stone stitching. Restoration of frescoes and icons. Restoration of timbers. Figures 18 and 19 illustrate the state of preservation of the church before the strengthening and retrofitting intervention project. Figures 20 and 21 illustrate the strengthening and intervention retrofitting processes and measures which had been done during the restoration project for Abu Serga church. The building materials decay and extensive deterioration before the restoration and strengthening measures Extensive disintegration and degradation of the bricks walls of the church due to the subsurface water rising Abu Serga church's wall stitching and strengthening The strengthening of the structural twelve columns and brick pillars The present preservation state of the church is provided after the complete installation of remedial measures for therapeutic intervention as shown in Figs. 22 and 23, especially in the cleaning of walls, removal of salt and stitching, and supporting the reinforcement of marble columns with designed steel rings. Abu Serg church after the retrofitting intervention project, the nave of the church Strengthening and reinforcing of structural marble and granite columns with steel ring beams The assessment of the stability of the ground (bearing soil) and the induced structural deficiency were examined in the Abu Serga church. A three-dimensional FEM model was used to conduct several types of plastic and unit analysis. The long-term structural deformation of the structure has been analyzed over the past centuries to find an appropriate way to improve and modify it. The numerical model has been calibrated based on the on-site measurement data to determine the corresponding model parameters. Different ways of improving the parameter have been investigated. It is therefore possible to conclude that modeling soft mud behavior under the Abu Serga script by numerical analysis is appropriate for understanding the geotechnical behavior of the problematic soil type and structural behavior of the structure above it. Various types of soil reinforcement and structural strengthening techniques have been tested to better fit this structure. The results of the analysis indicated that the deteriorated cement plaster layers that covered the interior brick walls during the 1960s should be removed due to its high damage and decay due to the high rise of subsurface water and humidity, as well as the damage to the brick walls due to the consolidation settlement of the bearing soil as well as the geotechnical and structural effect of the earthquakes in particular the Dahshur 1992 and Aqaba 1995 earthquakes. In conclusion, the detailed analysis of the church of Abu Serga proved that these important Coptic architectural heritages represent low safety factors for both static loading and soil settlement. 
Accordingly, the existence of a well-focused strengthening and retrofitting intervention program was very necessary and essential. Ilies NM, Popa A. Geotechnical problems of historical buildings from Transylavania. In: Bilota A, editor. Geotechnical engineering for the preservation of the monuments and historic sites. London: Taylor & Francis Group; 2013. Nasser H, Marawan H, Deck O. Influence of differential settlements on masonry structures. Computat Model Concr Struct. 2014. https://doi.org/10.1201/b16645-93. Müthing N, Zhao C, Hölter R, Schanz T. Settlement prediction for an embankment on soft clay. Comput Geotech. 2018. https://doi.org/10.1016/j.compgeo.2017.06.002. Brinkgreve RBJ, Engin E, Swolfs MW. Material models manual. Plaxis 3D Plaxis bv, Delft, Netherlands. 2011. Kalai M, Bouassida M, Tabchouche S. Numerical modeling of Tunis soft clay. Geotech Eng J SEAGS AGSSE A. 2015;46(4):87–95. Duncan JM, Chang C. Nonlinear analysis of stress and strain in soils". J Soil Mech Found Div. 1970;96(SM5):1629–54. Vakili KN, Barciago T, Lavason AA, Schanz J. A practical approach to constitutive models for the analysis of geotechnical problems. In: 3rd international conference on computational geomechanics (ComGeo III), vol 1, Krakow, Poland. 2013. Hemeda S, Pitilakis K. Serapeum temple and the ancient annex daughter library in Alexandria, Egypt: geotechnical–geophysical investigations and stability analysis under static and seismic conditions. Eng Geol. 2010;113:33–43. Hemeda S, Pitilakis K, Bakasis E. Three-dimensional stability analysis of the central rotunda of the catacombs of Kom El-Shoqafa, Alexandria, Egypt. In: 5th international conference in geotechnical earthquake engineering and soil dynamics, May 24–29 2010, San Diego, California, USA. Yamamoto K, Tabata K, Kitamura R. Finite Element Analysis of Seepage and Deformation Properties in Shirasu Ground for the Situations of Sheet Pile Excavation", Elsevier BV. 2001. Kolymbas D. Constitutive modeling of granular materials. Berlin: Springer; 2017. Di Prisco C, Imposimato S, Aifantis EC. A visco-plastic constitutive model for granular soils modified according to non-local and gradient approaches. Int J Numer Anal Methods Geomech. 2002;26(2):121–38. Carter JP, Booker JR, Wrothu CP. A critical state soil model for cyclic loading. Soil Mech Transient Cyclic Load. 1982;2(1):35–62. Desai CS, Gallagher RH. Mechanics of engineering. London. 1984. Mroz Z, Norris VA, Zienkiewicz OC. An anisotropic critical state model for soils subjected to cyclic loading. Geotechnique. 1981;31(4):451–5. Provest JH. A simple plastic theory for frictional cohesionless soils. Soil Dyn Earthq Eng. 1985;4(1):9–11. Iai S, Matsunaga Y, Kaneoka T. Strain space plasticity model for cyclic mobility. Soils Found. 1992;32(2):1–9. Paster M, Zienkiewicz OC, Chan AHC. Generalized plasticity and the modeling of soil behavior. Int J Numer Anal Meth Geomech. 1990;14(1):151–60. Boscato G, Dal Cin A, Riva G, Russo S, Sciarretta F. Knowledge of the construction technique of the multiple leaf masonry façades of palazzo Ducale in Venice with ND and MD tests. Adv Mater Res. 2014;919–921:318–24. Bosiljkov V, Uranjek M, Žarnića R, Bokan-Bosiljkov V. An integrated diagnostic approach for the assessment of historic masonry structures. J Cult Herit. 2010;11(3):239–49. Hemeda S, Pitilakis K. Geophysical Investigations at Cairo's Oldest, the Church of Abu Serga (St. Sergius), Cairo, Egypt. Res Nondestruct Eval. 2017;28(3):123–49. https://doi.org/10.1080/09349847.2016.1143991. Hemeda S. 
Seismic hazard analysis for preservation of architectural heritage: the case of the Cairo's oldest Abu Serga church. Int J Civil Eng Sci. 2014;3:2. Milani G, Valente M, Alessandri C. The narthex of the Church of the Nativity in Bethlehem: a non-linear finite element approach to predict the structural damage. Comput Struct. 2018;207:3–18. Hemeda S. Non-linear static analysis and seismic performance of modern architectural heritage in Egypt. Mediterran Archaeol Archaeometry. 2016;16(3):1–15. Sohan K, Das BM. Principles of geotechnical engineering. 9th ed. Boston: Cengage Learning; 2018. Rajapakse R. Consolidation settlement of foundations. New York: Elsevier; 2016. American Society of Civil Engineers. Guidelines for instrumentation and measurements for monitoring dam performance. Virginia: Reston; 2000. American Society for Testing and Materials, D 2435. Standard test method for one dimensional consolidation properties of soils. In: American Society for Testing and Materials (ASTM), Vol. 04.08, West Conshohocken, Pennsylvania; 1999. p. 210–9. Das BM. Principles of foundation engineering. 2nd ed. Boston: PWS-KENT Publishing Company; 1990. Fellenius BH. Recent advances in the design of piles for axial loads, dragloads, downdrag, and settlement. Ontario: ASCE and Port of NY&NJ Seminar1, Urkkada Technology Ltd; 1998. Holtz RD, Kovacs WD. Introduction to geotechnical engineering. New Jersey: Prentice Hall Inc., Englewood Cliffs; 1981. p. 309–90. Sitharam TG. Advanced foundation engineering. Bargalore: Indian Institute of Science; 2013. Hemeda S, El-banna S. Structural deficiency and intervention retrofitting measures of rubble filled masonry walls in Islamic historical buildings in Cairo. Mediterran Archaeol Archaeometry. 2014;14(1):235–46. The author read and approved the final manuscript. Availability of supporting data The author confirms that he is not currently in receipt of any research funding relating to the research presented in this manuscript. Architectural Conservation Department, Faculty of Archaeology, Cairo University, Giza, P.C 12613, Egypt Sayed Hemeda Search for Sayed Hemeda in: Correspondence to Sayed Hemeda. Hemeda, S. 3D finite element coupled analysis model for geotechnical and complex structural problems of historic masonry structures: conservation of Abu Serga church, Cairo, Egypt. Herit Sci 7, 6 (2019) doi:10.1186/s40494-019-0248-z The oldest Coptic church Geotechnical modeling Soil problems Problematic soils Soil settlement 3D constitutive models FE PLAXIS 3D Vertical displacement
DOI:10.1051/0004-6361/201321722 On double-degenerate type Ia supernova progenitors as supersoft X-ray sources. A population synthesis analysis using SeBa @article{Nielsen2013OnDT, title={On double-degenerate type Ia supernova progenitors as supersoft X-ray sources. A population synthesis analysis using SeBa}, author={M.T.B. Nielsen and Gijs Nelemans and Rasmus Voss and Silvia Toonen}, journal={Astronomy and Astrophysics}, pages={1-9} M. Nielsen, G. Nelemans, S. Toonen Published 8 October 2013 Context. The nature of the progenitors of type Ia supernova progenitors remains unclear. While it is usually agreed that singledegenerate progenitor systems would be luminous supersoft X-ray sources, it was recently suggested that double-degenerate progenitors might also go through a supersoft X-ray phase. Aims. We aim to examine the possibility of double-degenerate progenitor systems being supersoft X-ray systems, and place stringent upper limits on the maximally possible durations of any… Figures and Tables from this paper table A.1 Supernova Type Ia progenitors from merging double white dwarfs - Using a new population synthesis model S. Toonen, G. Nelemans, S. Zwart The study of Type Ia supernovae (SNIa) has lead to greatly improved insights into many fields in astrophysics, however a theoretical explanation of the origin of these events is still lacking. We… Observational Clues to the Progenitors of Type Ia Supernovae D. Maoz, F. Mannucci, G. Nelemans Type Ia supernovae (SNe Ia) are important distance indicators, element factories, cosmic-ray accelerators, kinetic-energy sources in galaxy evolution, and end points of stellar binary evolution. It… Next generation population synthesis of accreting white dwarfs – I. Hybrid calculations using bse + mesa Hai-liang Chen, T. Woods, L. Yungelson, M. Gilfanov, Zhanwen Han Accreting, nuclear-burning white dwarfs (WDs) have been deemed to be candidate progenitors of Type Ia supernovae (SNe Ia), and to account for supersoft X-ray sources, novae, etc. depending on their… Binary white dwarfs and decihertz gravitational wave observations: From the Hubble constant to supernova astrophysics A. Maselli, S. Marassi, M. Branchesi Astronomy & Astrophysics Context. Coalescences of binary white dwarfs represent a copious source of information for gravitational wave interferometers operating in the decihertz band. Moreover, according to the double… The white dwarf binary pathways survey – II. Radial velocities of 1453 FGK stars with white dwarf companions from LAMOST DR 4 A. Rebassa-Mansergas, J. Ren, X. Kong We present the second paper of a series of publications aiming at obtaining a better understanding regarding the nature of type Ia supernovae (SN Ia) progenitors by studying a large sample of… Identification of new Galactic symbiotic stars with SALT – I. Initial discoveries and other emission line objects B. Miszalski, J. Mikołajewska We introduce the first results from an ongoing, systematic survey for new symbiotic stars selected from the AAO/UKST SuperCOSMOS H$\alpha$ Survey (SHS). The survey aims to identify and characterise… The Evolution of Compact Binary Star Systems K. Postnov, L. Yungelson Living reviews in relativity The formation and evolution of compact binary stars consisting of white dwarfs, neutron stars, and black holes are reviewed, including their role as progenitors of cosmologically-important thermonuclear SN Ia and AM CVn-stars, which are thought to be the best verification binary GW sources for future low-frequency GW space interferometers. 
The astrophysical science case for a decihertz gravitational-wave detector I. Mandel, A. Sesana, A. Vecchio We discuss the astrophysical science case for a decihertz gravitational-wave mission. We focus on unique opportunities for scientific discovery in this frequency range, including probes of type IA… Progenitor constraints on the Type Ia supernova SN 2014J from Hubble Space Telescope H β and [O iii] observations O. Graur, T. Woods Monthly Notices of the Royal Astronomical Society: Letters Type Ia supernovae are understood to arise from the thermonuclear explosion of a carbon–oxygen white dwarf, yet the evolutionary mechanisms leading to such events remain unknown. Many proposed… View 4 excerpts, cites background, methods and results Obscuration of supersoft X-ray sources by circumbinary material : A way to hide Type Ia supernova progenitors? M. Nielsen, C. Dominik, G. Nelemans, R. Voss Context. The progenitors of Type Ia supernovae are usually assumed to be either a single white dwarf accreting from a non-degenerate companion (the single-degenerate channel) or the result of two… THE PROGENITORS OF TYPE Ia SUPERNOVAE. II. ARE THEY DOUBLE-DEGENERATE BINARIES? THE SYMBIOTIC CHANNEL R. D. Stefano In order for a white dwarf (WD) to achieve the Chandrasekhar mass, MC, and explode as a Type Ia supernova (SNIa), it must interact with another star, either accreting matter from or merging with it.… View 11 excerpts, references background, methods and results Upper limits on bolometric luminosities of three Type Ia supernova progenitors: new results in the ongoing Chandra archival search for Type Ia supernova progenitors M. Nielsen, R. Voss, G. Nelemans We present analysis of Chandra archival, pre-explosion data of the positions of three nearby (< 25 Mpc) type Ia supernovae, SN2011iv, SN2012cu & SN2012fr. No sources corresponding to the progenitors… THE PROGENITORS OF TYPE Ia SUPERNOVAE. I. ARE THEY SUPERSOFT SOURCES In a canonical model, the progenitors of Type Ia supernovae (SNe Ia) are accreting, nuclear-burning white dwarfs (NBWDs), which explode when the white dwarf reaches the Chandrasekhar mass, M{sub C} .… Discovery of the progenitor of the type Ia supernova 2007on R. Voss, G. Nelemans The discovery of an object in pre-supernova archival X-ray images at the position of the recent type Ia supernova in the elliptical galaxy NGC 1404 is reported, which favours the accretion model for this supernova, although the host galaxy is older than the age at which the explosions are predicted in the accreting models. A New Evolutionary Path to Type Ia Supernovae: A Helium-rich Supersoft X-Ray Source Channel I. Hachisu, M. Kato, K. Nomoto, Hideyuku Umeda We have found a new evolutionary path to Type Ia supernovae (SNe Ia) that has been overlooked in previous work. In this scenario, a carbon-oxygen white dwarf (C+O WD) is originated not from an… Circumstellar Material in Type Ia Supernovae via Sodium Absorption Features A. Sternberg, A. Gal-yam, G. Stringfellow It is shown that the velocity structure of absorbing material along the line of sight to 35 type Ia supernovae tends to be blueshifted, and these structures are likely signatures of gas outflows from the supernova progenitor systems. Upper limits on bolometric luminosities of 10 Type Ia supernova progenitors from Chandra observations We present an analysis of Chandra observations of the position of ten nearby (< 25 Mpc) type Ia supernovae, taken before the explosions. 
No sources corresponding to progenitors were found in any of… An upper limit on the contribution of accreting white dwarfs to the type Ia supernova rate M. Gilfanov, Á. Bogdán It is concluded that no more than about five per cent of type Ia supernovae in early-type galaxies can be produced by white dwarfs in accreting binary systems, unless their progenitors are much younger than the bulk of the stellar population in these galaxies, or explosions of sub-Chandrasekharwhite dwarfs make a significant contribution to the supernova rate.
On algebraic numbers Walter Rudin Exercise 2.2

To prove that the set of all algebraic numbers is countable, the hint provided is that there are finitely many equations of the form

$$n+\left|a_0\right|+\left|a_1\right|+\dots+\left|a_n\right|=N,$$

where the $a_i$ are coefficients, $n$ is the degree of the polynomial and $N$ is any positive integer. How is this analogous to the problem?

real-analysis algebraic-numbers

monty singh

What are the $|a_i|,n,$ and $N$? Also, do you know that a countable union of countable sets is countable? – Guido A. Sep 8 '18 at 5:32

@GuidoA. I have added the details. Yes, I am aware of that fact. – monty singh Sep 8 '18 at 5:38

I don't know about "analogous", but it is relevant, in that it implies that you can list all the polynomials. – Gerry Myerson Sep 8 '18 at 5:44

You left out the part before that, where a certain equation was defined, involving variables like $n, a_0, a_1, \dots$. Now, the hint tells us to show, for a given $N$, there are only finitely many of those equations satisfying $n+\left|a_0\right|+\left|a_1\right|+\dots+\left|a_n\right|=N$. – GEdgar Sep 8 '18 at 13:14

I'm not entirely sure of what Rudin meant with the hint, but the general spirit of his approach is that we can separate algebraic numbers in terms of which degree the polynomials they are roots of have, and then separate the polynomials of a given degree in terms of their coefficients. All these will be countable, so when we take the union of them the result will remain countable.

Concretely, let's first see that $\mathbb{Z}_n[X]$, the integer polynomials of degree $n$, are countable: note that the function that maps $a_nX^n + \dots + a_0$ to $(a_n, \dots, a_0) \in \mathbb{Z}^{n+1}$ is injective. Furthermore, since

$$ \mathbb{Z}[X] = \bigcup_{n \geq 0}\mathbb{Z}_n[X] \cup \{0\} $$

we have that $\mathbb{Z}[X]$ itself is countable. Finally, the algebraic numbers can be written as the union of the roots of all polynomials with integer coefficients:

$$ \mathbb{A} = \bigcup_{p \in \mathbb{Z}[X] \setminus \{0\}}p^{-1}(0) $$

Each preimage $p^{-1}(0)$ is finite, because $p$ has finite degree, and the union is countable because $\mathbb{Z}[X]$ is, so the result is countable itself.

Guido A.

Show that for any $n\geqslant 1$, the set of roots of polynomials with integer coefficients of degree exactly $n$ is countable, then use that a countable union of countable sets is countable.

Fimpellizieri

An attempt. Consider $a_0 z^n+ a_1z^{n-1}+\dots+a_{n-1}z+a_n=0$, where the $a_{i}$, $i=0,1,2,\dots,n$, are integers and $a_0\not =0$. To every polynomial of degree $n$ corresponds an $(n+1)$-tuple $(a_0,a_1,\dots,a_{n-1},a_n)$ (bijection). The set $T_n:=\{(a_0,a_1,\dots,a_{n-1},a_n)\mid a_0 \not =0\} \subset \mathbb{Z}^{n+1}$. As a subset of a countable set, $T_n$ is countable. Let $t_{kn} \in T_n$, $k=1,2,\dots$ Every $t_{kn}$ corresponds to a polynomial of degree $n$ which has $n$ roots. Arrange the roots in dictionary order: $z_1,z_2,\dots,z_n$. Then $A_n:=\{(t_{kn}, j)\mid 1\le j \le n,\ j \text{ integer},\ k=1,2,\dots\}$ is, as a subset of $T_n\times\mathbb{N}$, countable. Finally, $\cup_{n} A_n$, a countable union of countable sets, is countable.

Peter Szilas
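To make Rudin's hint concrete, here is a small illustrative Python sketch (my own, not from any of the answers): for each "height" $N = n+\left|a_0\right|+\dots+\left|a_n\right|$ there are only finitely many admissible integer polynomials, so listing their roots for N = 1, 2, 3, ... enumerates every algebraic number, which is exactly the countability argument.

from itertools import product
import numpy as np

def algebraic_numbers_of_height(N):
    """Roots of all integer polynomials a_0*z^n + ... + a_n with a_0 != 0 and n + |a_0| + ... + |a_n| = N."""
    roots = []
    for n in range(1, N):                      # degree n; at least 1 is left over for |a_0|
        budget = N - n                         # required sum of |coefficients|
        for coeffs in product(range(-budget, budget + 1), repeat=n + 1):
            if coeffs[0] != 0 and sum(abs(c) for c in coeffs) == budget:
                roots.extend(np.roots(coeffs)) # numpy expects the leading coefficient first
    return roots

for N in range(2, 6):
    print(N, "->", len(algebraic_numbers_of_height(N)), "roots (finitely many)")

Since every algebraic number is a root of some integer polynomial with some finite height N, the algebraic numbers form a countable union of finite sets, hence a countable set.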
CommonCrawl
Buffer-aware adaptive resource allocation scheme in LTE transmission systems
Ruiyi Zhu1 & Jian Yang1

Dynamic resource allocation is a key component of 3GPP long-term evolution (LTE) for satisfying quality-of-service (QoS) requirements as well as improving the system throughput. In this paper, a buffer-aware adaptive resource allocation scheme for LTE downlink transmission is proposed for improving the overall system throughput while guaranteeing the statistic QoS and keeping certain fairness among users. Specifically, the priorities of the users' data queues in the base station are ranked by their remaining life time or their queue overflow probability, which is estimated by applying the large deviation principle. An online measurement-based algorithm, which requires no statistical knowledge of the network conditions, uses the queue priorities to dynamically allocate the resource blocks (RBs) in order to avoid buffer overflow and provide statistic QoS guarantees. The simulation results show that the proposed algorithm improves the throughput and fairness while considerably reducing the average bit loss rate.

Mobile communication technologies have developed rapidly, moving from the third-generation (3G) systems to the long-term evolution (LTE) systems, which aim to provide high data rates, low latency, packet-optimized radio access, and flexible bandwidth deployments [1]. The LTE system allows high flexibility in resource allocation, which enables dynamic allocation of resource blocks (RBs) among the potential users [2, 3]. Conventional resource allocation schemes in wireless systems are generally based on user priority [4, 5]. They are designed according to users' channel status and QoS guarantees to maximize the overall system throughput. However, providing fairness among users is another essential design consideration, although it usually sacrifices the system throughput and/or violates QoS requirements. Buffer-aware resource allocation schemes can improve some of these performance metrics [6]. The resource allocation problem in wireless systems has been widely addressed in the literature, but it is still challenging in LTE systems to design a buffer-aware resource allocation scheme that increases the system throughput as much as possible, guarantees QoS requirements, and achieves fairness.

In this paper, we propose a buffer-aware adaptive resource allocation scheme that jointly considers user scheduling and RB allocation to provide QoS guarantees in LTE transmission systems. For user scheduling, taking the finite buffer size into account, each user's queue priority is ranked according to its remaining life time or its queue overflow probability, which is estimated by applying the large deviation principle. For RB allocation, an online measurement-based algorithm is proposed to dynamically allocate RBs and thereby adjust the service rates of the user queues in order to provide QoS guarantees. The goal is to improve the total system throughput as much as possible subject to providing QoS guarantees for different users and maintaining a certain level of fairness.

In this paper, we consider multiuser resource allocation for the downlink in LTE systems. The scheduler at the base station is responsible for allocating resources to the different users as a function of the users' queue priority as well as the current channel conditions. There is much prior work on this problem.
The classic scheduling algorithms include Round Robin (RR) algorithm [7], Max C/I algorithm [8], and proportional fair (PF) algorithm [9]. Although many works [10–13] apply multiuser diversity in user scheduling for maximizing system throughput, the system buffer size in these schemes is assumed to be infinite, that is to say, any arriving bit can be buffered and any bit loss due to buffer overflow will not happen. This assumption may not be reasonable since the buffer size is limited in the transceivers. Resource allocation for finite buffer space has been discussed in the literature related to the wireless network. The authors in [14] design a new LTE buffer aware scheduler to opportunistically assign RBs for video streaming applications in order to maximize the average video quality. In [15], the buffer occupancy based approach is presented to achieve video rate adaptation, while in [16], a dynamic programming framework is applied to study the buffer v s. QoS tradeoff for wireless media streaming in a single user scenario. These papers cited above mainly focus on video traffic. But the eNB in the practical situation schedules and transmits general data traffic besides video traffic. There are several related works for packet scheduling and resource allocation in wireless data systems. In [17], M. Andrews et al. focus on how to adapt MaxWeight algorithm to the multicarrier wireless data systems, and a simple variant was introduced into the objective for reducing resource wastage. In [18], M. Realp et al. propose a resource allocation algorithm in multiuser OFDMA by considering queue and channel state information. However, these methods focus on maximizing the overall throughput by improving spectral efficiency, which may lead to unfair resource sharing among users. In fact, fairness is necessary to guarantee minimum performance of the users experiencing bad channel conditions. The buffer-aware adaptive resource allocation proposed for LTE system in this paper will consider the problem of keeping certain fairness while improving the total system throughput. Due to the limited available resource, RBs allocation aims to efficiently use the shared resource and allocate the resource in a fair manner. Naturally, there is a tradeoff between fairness and system throughput. PF algorithm has emerged as a prominent candidate since it balances between fairness and throughput. In [19], S. Lee proposes a sub-optimal method, i.e., PF metric(2), which introduces the status of queues into PF metric(1). However, it is pointed out in [19] that although PF metric(2) is more responsive to the queues than PF metric(1), it incurs a reduced system throughput because its isolated RB assignment strategy may assign the RB to a user having low channel quality. Similar work related to PF scheduling in LTE systems can also be found in [20, 21] and [22]. By considering both fairness and the constraint of finite buffer space, a channel-adapted and buffer-aware (CABA) packet scheduling algorithm is proposed in [23]. This method defines and applies the user priority in the resource allocation for avoiding buffer overflow. However, the empirical parameters in the priority function are hard to appropriately choose. Inappropriate parameters will lead to an inaccurate user priority, which induces excessive resource allocated to the users and reduce the utilization of the system resource. 
The eNodeB may have large capacity to cache traffic such as audio and video streams, but it substantially increases the delay and reduces QoS. Hence, we consider the finite buffer size and queue overflow probability in this paper. We will jointly exploit the priorities of user queues and the RBs capacity for controlling the service rate of each user data queue in the base station, instead of solely relying on any one of them. Under the constraint of finite buffer space, the proposed buffer-aware adaptive resource scheduling algorithm aims at achieving three objectives: (1) keep bit loss rate as low as possible by means of taking buffer status into account, (2) improve the total system throughput as large as possible, and (3) keep certain fairness among users by means of adjusting the overflow probability. In this paper, we proposed a buffer-aware adaptive resource allocation scheme for LTE downlink transmission by jointly exploiting the priorities of user queues and RBs capacity. The proposed problem is formulated as improving the total system throughput subject to providing QoS guarantee for different users while keeping certain fairness. Specifically, our main contributions are listed as follows: User scheduling: Firstly, the user scheduling scheme is considering the finite buffer size. Secondly, the scheme is depending on the users' queue priority which is calculated by their remaining life time or their queue overflow probability. The overflow probability estimation model is derived by applying the large deviation principle [24], which incorporates both the queue fullness and its variation. RBs allocation: An online measurement-based algorithm is further presented to adjust the service rate of the user queues, which requires no statistical knowledge of the network conditions. According to the user queues' priorities, we control the service rate of each user queue by dynamically allocating the RBs, in order to avoid queue overflow and provide statistic QoS guarantee. We present experimental results to show that the proposed algorithm is able to improve the total system throughput while guaranteeing certain fairness among users and providing QoS guarantee. The rest of the paper is organized as follows: Section 'System model and problem statement' describes a system and channel model for resource allocation. In section 'User priority determination scheme', we present user priority determination algorithm, including calculating the remaining life time or queue overflow probability. Section 'Online measurement-based algorithm for dynamic RBs allocation' is devoted to describing the online measurement-based algorithm for dynamic resource allocation. Section 'Performance evaluation' provides the experimental results and performance comparisons. Finally, conclusions are drawn in Section 'Conclusion'. System model and problem statement We consider LTE system architecture with downlink RBs allocation as shown in Fig. 1. The eNode B (eNB) controls the bit service rate through dynamically allocating RBs to users. The total number of the data bits within a RB is referred to as RB capacity. The better channel condition of an RB implies a higher achievable RB capacity. Different RBs may have distinct channel conditions [20]. The smallest resource unit that can be allocated to a user is a scheduling block (SB), which consists of two consecutive RBs [25, 26]. In each time slot, several SBs may be allocated to a single user, but each SB is uniquely assigned to a user. 
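As a small illustration of the allocation unit just described (our own sketch, not from the paper), the per-slot mapping from SBs to users can be represented as a dictionary in which every SB index appears exactly once, while a single user may hold several SBs:

```python
# Minimal sketch of the per-slot assignment constraint: each scheduling block (SB)
# is granted to exactly one user, whereas one user may receive several SBs.
from collections import defaultdict

N_SB = 50        # number of SBs available in one slot (assumed value)
K_USERS = 10     # number of active users (assumed value)

# assignment[n] = k means SB n is granted to user k in the current slot (toy choice)
assignment = {n: n % K_USERS for n in range(N_SB)}

sbs_per_user = defaultdict(list)
for sb, user in assignment.items():
    sbs_per_user[user].append(sb)

# Every SB appears exactly once across all users, so the "uniquely assigned"
# constraint holds, while users may hold several SBs each.
assert sorted(sb for sbs in sbs_per_user.values() for sb in sbs) == list(range(N_SB))
```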
We focus on single-cell downlink resource allocation in eNB of LTE system employing OFDMA. The implementation of adaptive resource scheduling in eNB relies on the following factors: buffer status (e.g., unoccupied buffer space and current queue length), traffic characteristics (e.g., bit arrival rate) and channel quality. We assume that eNB has perfect and instant channel information for all downlink transmissions via the feedback channel, while the channel quality is assumed stationary for the duration of each subframe, but may vary from subframe to subframe. Since the data queue of each user locates in eNB, it is natural that eNB knows the amount of each user's data in the transmission-side buffer without additional signaling to report. Let K and N, respectively, denote the number of the users and the number of all SBs. In the practical situation, eNB does not differentiate the transmitting data types. Hence, one user is assumed to have a single queue. Then, the kth queue length can be updated as $$ Q_{k}(t+1)=max\{Q_{k}(t)-V_{k}(t),0\}+A_{k}(t), $$ where A k (t)∈A={0,1,…,m A } denotes the bit arrival number of the kth user queue during the slot t, and m A is the maximum number of bits arriving in a single slot. Here, we assume that the bit arrival process of the kth user queue A k (t) to be an i.i.d sequence. V k (t)∈V={0,1,…,m V } represents the bit number transmitted during the slot t, where m V is the maximum number of bits served in a single slot. Define Q k (t) as the length of the kth queue in terms of bits at the beginning of the slot t. Here, we consider the practical scenario of finite buffer space and integrate the buffer status into the scheduling decision to decrease the bit loss rate. It is noted that buffer overflow implies the resource is not enough for transmitting the data in the queue, and the data has to be dropped when the buffer is full, while buffer underflow means that the resource is sufficient for conveying the data in the queue, and data loss will not occur. Hence, we only consider the buffer overflow in the transmitter-side (eNB-side). Let us define a threshold \(Q_{k}^{max}\) as the the maximum length of the kth user queue. If the kth queue length is higher than \(Q_{k}^{max}\), it implies that an excessive number of bits is buffered, and bit loss may be likely to occur. Naturally, any queue length exceeding \(Q_{k}^{max}\) is undesirable. Hence, the problem may be described as that of selecting an appropriate service rate to keep each queue length lower than \(Q_{k}^{max}\). The kth user queueing system model is shown in Fig. 2. When the arrival rate is larger than the service rate, bit loss may occur due to the queue overflow. In order to reduce the average bit loss rate (BLR), we plan to apply the large deviation algorithm to calculate the queue overflow probability. Accordingly, we define overflow probability of the kth queue as $$ P_{k_{overflow}}=P\left(Q_{k}(t)> Q_{k}^{max}\right). $$ Queueing system model For a given slot t, the increase of the kth queue length is characterized by $$ I_{k}(t)=A_{k}(t)-V_{k}(t). $$ We define remaining life time R k (t) to denote the remaining time of the kth user queue to be fullness. Then the remaining life time of the kth user at the slot t can be calculated by $$ R_{k}(t) ={ {Q_{k}^{max}-Q_{k}(t)}\over E[I_{k}(t)] }. $$ However, the distribution of I k (t) is not available. 
Hence, we use the sample mean of queue variations in the last N slots to estimate E[I k (t)], i.e., \({ {1 \over N} \sum _{t_{0}=t-N-1}^{t-1} {(A_{k}(t_{0})-V_{k}(t_{0}))}} \). Then, the remaining life time of the kth user at the slot t can be calculated by $$ R_{k}(t) ={ {Q_{k}^{max}-Q_{k}(t)}\over { {1 \over N} \sum_{t_{0}=t-N-1}^{t-1} {(A_{k}(t_{0})-V_{k}(t_{0}))}} }. $$ This paper aims for maximizing the throughput while reducing BLR and keeping certain fairness among users. In order to achieve this, we jointly consider the users' queue priority and RBs capacity for controlling the service rate of each data queue. The users' queue priority is based on the remaining life time or the queue overflow probability which is calculated by applying the large deviation principle. Then, according to the queue priority, we adjust the service rate of each user queue through dynamically allocating the RBs, in turn providing different transmission rate to achieve statistic QoS guarantee. Channel quality indicator (CQI) reporting procedure is a fundamental feature of LTE networks since it enables the estimation of the downlink channel quality at the eNB [27]. UE reports a CQI value of each RB to the eNB, and the eNB uses CQI for the resource allocation [28]. Let \({r_{k}^{n}}\left (t\right)\) denote instantaneous data transmission rate when the nth SB is assigned to the kth user queue at the slot t. According to CQI information, \({r_{k}^{n}}\left (t\right)\) can be calculated using the AMC module or simply estimated via the well-known Shannon's formula for the channel capacity [27], i.e., $$ {r_{k}^{n}}(t) = {log}_{2} \left(1+\gamma_{k,n}\right). $$ where γ k,n is the signal-to-interference-plus-noise-ratio (SINR) for the kth user on the nth SB. Let us define \({x_{k}^{n}}\left (t\right)\) to indicate whether the nth SB is assigned to the kth user at the slot t. If the nth SB is assigned to the kth user at the slot t, we have \({x_{k}^{n}}(t)\)=1. Otherwise \({x_{k}^{n}}(t)\)=0. Then, the resource allocation problem can be defined as improving the system throughput as large as possible, i.e., $$ \max\sum_{k=1}^{K} \sum_{n=1}^{N} {{x_{k}^{n}}(t)} {r_{k}^{n}}(t), $$ In this paper, we apply Jain's fairness index [29], F(t), to indicate the fairness to denote the system fairness at time t. The formula of Jain's fairness index are given as (33) and (34) in the section of Performance evaluation. The constraints are listed as follows: $$ \sum_{k=1}^{K} {x_{k}^{n}}(t)= 1, n\in \lbrace1,2,\ldots,N\rbrace. $$ $$ 1-F(t)\leq \xi. $$ where ξ is a given threshold value of the fairness deviation. Equation (8) indicates that each SB can be only assigned to one user during the slot t. Equation (9) indicates that the difference between 1 and the system fairness should be kept less than ξ. The resource allocation problem (7–9) is complicated and intractable to obtain the optimal solution by exhaustive search. Here, we propose a buffer-aware adaptive resource allocation scheme which considers the different priorities of user queues and RB capacity, in order to achieve a better performance tradeoff of throughput, fairness and average BLR. User priority determination scheme In order to improve the total system throughput and QoS for different users while keeping certain fairness, we need to determine the users' queue priority for deriving an online measurement based resource allocation. 
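Before turning to the priority scheme itself, the following sketch (our notation and toy numbers, not the authors' code) illustrates two quantities defined above: the sliding-window estimate of the remaining life time in Eq. (5) and the per-SB Shannon rate in Eq. (6).

```python
import math


def remaining_life_time(q_max, q_now, arrivals, services):
    """Eq. (5): (Q_k^max - Q_k(t)) divided by the sample mean of A_k - V_k
    over the last N observed slots."""
    growth = [a - v for a, v in zip(arrivals, services)]
    mean_growth = sum(growth) / len(growth)
    if mean_growth <= 0:          # queue is not growing on average (our convention)
        return math.inf
    return (q_max - q_now) / mean_growth


def sb_rate(sinr_linear):
    """Eq. (6): Shannon rate (bit/s/Hz) of one SB for a given linear SINR."""
    return math.log2(1.0 + sinr_linear)


# Toy usage with assumed numbers:
R_k = remaining_life_time(q_max=3e4, q_now=1.2e4,
                          arrivals=[50, 48, 52, 51], services=[40, 45, 39, 44])
r_kn = sb_rate(sinr_linear=10 ** (15 / 10))   # 15 dB average SINR
print(f"remaining life time = {R_k:.1f} slots, per-SB rate = {r_kn:.2f} bit/s/Hz")
```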
The overflow probability estimation model is derived by applying large deviation principle, which requires no statistical knowledge of the network conditions. Then, we rank the users' queue priority by their remaining life time or their queue overflow probability. Estimation model for the queue overflow probability The arrival rate of the incoming bits depends on the service type, while the service rate depends on the resource allocation policy as well as the wireless channel conditions which are time-varying in nature. Hence, the arrival process and the service process are independent of each other. Our aim is to control all user queues in such a way that the service demands of the data in each queue could be satisfied. Moreover, the resulted scheme should be robust to the variations of the arrival and service processes. Let I k (t)=A k (t)−V k (t), where I k (t)∈{−m V ,⋯,0,1,⋯,m A }, and let \({\pi _{i}^{k}}=P(I_{k}(t)=i)\) denote the corresponding kth user queue-length variation probability distribution. Since A k (t) is determined by the bit arrival number during the slot t and V k (t) is determined by the bit number served during the slot t, their difference I k (t) characterizes the mismatch between the bit service rate and the bit arrival rate of the kth user queue during the slot t. I k (t)<0 implies that the bit service rate is higher than the arrival rate in the tth slot, while I k (t)>0 implies that the bit service rate cannot satisfy the bit arrival. Due to the time-varying number of bit arrivals and the state of SBs, the polarity of the sequence I k (t)(t=1,2,…) may change frequently between negative and positive. The kth user queue length increment during the period spanning from the tth slot to the (t+T)th slot can be formulated as $$ I_{k}^{t+T}=\sum_{i=1}^{T} I_{k}(t+i), $$ where T is called prediction interval. Then, the length of the kth user queue at the beginning of the (t+T)th slot can be expressed as $$ Q_{k}(t+T)= Q_{k}(t)+ I_{k}^{t+T}. $$ Let \(P_{k_{\textit {overflow}}}^{t+T}\) denote the overflow probability of the kth user queue during the slot (t+T), which is defined as $$ P_{k_{overflow}}^{t+T}=P\left(Q_{k}(t+T)> Q_{k}^{max}\right). $$ The above expression can be rewritten as $$ P_{k_{overflow}}^{t+T}=P\left(Q_{k}(t)+ I_{k}^{t+T} > Q_{k}^{max}\right). $$ Define the achievable average queue growth of the kth user queue during the future T slots as $$ g_{k}=\frac{Q_{k}^{max}-Q_{k}(t)}{T}, $$ and the expected average queue growth of the kth user queue in each slot during the T slots as $$ c_{k}=E\left[\frac{\sum_{i=1}^{T} I_{k}(t+i)}{T}\right]. $$ where E[·] denotes expectation operator. c k >g k implies that there is high overflow possibility of the kth user queue after T slots. Equation (12) can be further written as $$\begin{array}{@{}rcl@{}} P_{k_{overflow}}^{t+T} &=& P\left(Q_{k}(t)+ I_{k}^{t+T} > Q_{k}^{max}\right)\\ &=& P\left({I_{k}^{t+T}/T}> {({Q_{k}^{max}-Q_{k}(t)})/T}\right)\\ &=& p\left({{\sum_{i=1}^{T} I_{k}(t+i)}\over T} > g_{k}\right). \end{array} $$ The term \(\frac {\sum _{i=1}^{T} I_{k}(t+i)}{T}\) in (16) is determined by the bits departure or the resource allocation, while g k is determined by the current length of the kth user queue. Since the queue overflow probability indicates the mismatch between the resource and the traffic, we can dynamically rank the users' queue priority based on the value of \(P_{k_{\textit {overflow}}}^{t+T}\). 
The larger value of \(P_{k_{\textit {overflow}}}^{t+T}\) means that queue overflow is more likely to occur and the corresponding user queue should have the higher priority of resource allocation, thus reducing the bit loss rate and satisfying QoS requirement. This is why the proposed method jointly considers RB capacity and the queue priority. \(Cram \acute e r's\) Theorem in the context of large deviation principle can be applied to estimate the overflow probability in [30]. Since A k (t) is an i.i.d process, I k (t)(t=1,2,…) are also i.i.d random variables with a finite moment generating function \(\phantom {\dot {i}\!}G(\theta)=E \lbrace e^{\theta I_{k}(t)}\rbrace \). According to \(Cram \acute e r's\) Theorem [31], if c k <g k , the sequence I k (t)(t=1,2,…) obeys the large deviation principle, and we have $$ {\lim}_{T \rightarrow \infty} \frac{1}{T} log P \left(\frac{\sum_{i=1}^{T} I_{k}(t+i)}{T} > g_{k}\right)= -l(g_{k}). $$ $$ l(g_{k})={{sup}_{\theta>0}} \lbrace g_{k} \theta - log G(\theta)\rbrace. $$ $$ log G(\theta)=log \left\{\sum_{i=-m_{V}}^{m_{A}}{\pi_{i}^{k}} e^{i \theta}\right\}. $$ Note that l o g G(θ) is a convex function, and the rate function l(g k ) is also convex [31]. For a sufficiently large value of T, according to (17) the overflow probability can be approximated by $$ P_{k_{overflow}}^{t+T} \approx e^{-T l(g_{k})}. $$ In theory, the overflow probability estimate becomes more accurate as T increases. Hence, the value of T should be sufficiently large. However, owing to the rapid exponential decay of the overflow probability estimate with T, we can set T to a moderate value for the sake of acquiring an accurate overflow probability estimate. The experimental results in the section of 'Performance evaluation' demonstrate that T≥60 is appropriate. In the next section, we show how to online estimate the overflow probability based on (20). Online estimation of the queue overflow probability According to (20), estimating the overflow probability requires the values of g k , c k , and \({\pi _{i}^{k}}\). It is easy to calculate g k according to (14). However, we have to estimate c k and \({\pi _{i}^{k}}\) because there is no prior knowledge about I k (t). Therefore, the historical observations are utilized to estimate these parameters by applying a sliding window-based method. Suppose the observed sequence is given by {I 1,I 2,I 3,⋯ }. The sliding window covers the T s most recent entries in this sequence, which is slid over this sequence. For the nth window, the observation vector is denoted by \(\phantom {\dot {i}\!}W_{n}=[I_{n}, I_{n-1}, I_{n-2}, \cdots I_{n-T_{s}+1}]\). For the parameter c k , we use the sample mean as its estimate, i.e., $$ \hat c_{k}={{\sum_{i={n-T_{s}+1}}^{n} I_{k}(i)}\over {T_{s}}}. $$ Following the similar steps in [32], we can apply the large deviation principle to analyze the confidence interval of c k . Below, we will estimate \({\pi _{i}^{k}} (i\in \lbrace -m_{V}, \ldots,0,1, \ldots, m_{A} \rbrace)\). Let \({T_{i}^{k}}\) denote the number of I k (t)=i events during the T s slots, which can be calculated by $$ {T_{i}^{k}}=\sum_{t={n-T_{s}+1}}^{n} 1_{i}(I_{k}(t)). $$ where 1 i (·) is a indicator function. When I k (t)=i, it has a value of 1, otherwise 0. Then, the frequency of I k (t)=i can be estimated as $$ \hat {u_{i}^{k}}(t)={{T_{i}^{k}} \over T_{s}}. $$ If the value of T s is too small, it may result in a large estimate error of \( \hat {u_{i}^{k}}(t)\), while too large, it may reduce the sensitivity to queue variations. 
Hence, T s should be set to a moderate value. We set it to 60 in our experiments. We apply an exponential smoothing method to smoothen the estimated value, which is written as $$ \hat {\pi_{i}^{k}}(t)= \rho \hat {\pi_{i}^{k}}(t-1)+(1- \rho) \hat {u_{i}^{k}}(t). $$ where the parameter ρ∈[0,1]. If ρ approaches to 1, the value of \(\hat {\pi _{i}^{k}}(t)\) largely depends on the past estimation, while if ρ=0, \(\hat {\pi _{i}^{k}}(t)\) totally depends on the current estimate \( \hat {u_{i}^{k}}(t)\). According to Gardner's report [33], ρ∈[0.7,0.9] is usually recommended. The above steps assist us to derive the online measurement-based method to estimate the queue overflow probability \(P_{k_{\textit {overflow}}}^{t+T}\) based on (20) in the (t+T) slot, by setting T to a moderate value in a practical application. The experimental results show that T≥60 is appropriate. User priority determination algorithm In the case of \(\hat c_{k}\geq g_{k}\), the average growth length of the kth user queue in each slot, \(\hat c_{k}\), is higher than the achievable average growth length of the kth user queue per slot, g k , in the forthcoming T slots. This implies that if keeping the current queue configuration unchanged with the bit service rate V k (t), after T slots, the queue will be more likely to have an overflow situation. Therefore, in this scenario, we should improve the bit service rate to prolong the remaining life time. In this paper, remaining life time R k (t) can be calculated by (5) to rank the queue priority of resource allocation. However, in the case of \(\hat c_{k}< g_{k}\), the average growth length of the kth user queue in each slot, \(\hat c_{k}\), is lower than the achievable average growth length of the kth user queue, g k , in the forthcoming T slots. But this does not necessarily imply that no overflow will happen in the future T slots, since \(\hat c_{k}\) is the average growth per slot which cannot characterize the specific queue length growth in a single time slot. Hence, the queue overflow might still occur. Since we have \(g_{k}> \hat c_{k}\), the queue overflow remains a rare event, and the queue overflow probability in the (t+T) slot, \(P_{k_{\textit {overflow}}}^{t+T}\), can be approximated by (20). The buffer-aware priority determination algorithm determines a priority value for each user, where the user in the case of \(\hat c_{k}\geq g_{k}\) is more emergent than in the case of \(\hat c_{k}< g_{k}\). The smallest value of R k (t) indicates the highest priority of the kth user. The value in ascending order represents that the users' priority is from high to low. The smaller value is \(P_{k_{\textit {overflow}}}^{t+T}\), the lower priority is the kth user. The value in descending order indicates that the users' priority is from high to low. According to the user queues' different priorities, in the next section, we show how to dynamically allocate the RBs to adjust the service rate for each user queue for improving the system throughput subject to providing QoS guarantee while keeping a certain fairness. Online measurement-based algorithm for dynamic RBs allocation In this section, we will present the proposed online estimation based dynamic service rate control algorithm, which relies on a strategy of mitigating the overflow probability or extending the remaining life time. Suppose there are K user queues indexed by the set Φ={1,2,…K} and N SBs indexed by the set Ω={1,2,…N}. 
For any k∈Φ, we calculate the value of R k (t) and \(P_{k_{\textit {overflow}}}^{t+T}\) in the case of \(\hat c_{k}\geq g_{k}\) and \(\hat c_{k}< g_{k}\), respectively. Then, the resource allocation strategy operates as follows: In the proposed buffer-aware resource allocation scheme, at the slot t we seek the user $$ k_{1}=\arg \min_{k\in \Phi }\lbrace R_{k}(t) \rbrace. $$ The SB having the maximum SINR can be obtained by $$ n_{1}=\arg \max_{\textit{n}\in\Omega} \lbrace \gamma_{n,k_{1}}\rbrace. $$ Then, we can calculate the transmission rate \(r_{k_{1}}^{n_{1}}(t)\) according to (6). If \(A_{k_{1}}(t)>r_{k_{1}}^{n_{1}}(t)\), it means that allocating SB is not enough to transmit the bits in the buffer for the most emergent user queue. Let Ω=Ω∖{n 1} (which means removing the element n 1 from the set Ω), then we choose the SB \(n_{2}=\arg \max _{\textit {n}\in \Omega } \lbrace \gamma _{n,k_{1}} \rbrace \) and calculate the transmission rate \(r_{k_{1}}^{n_{2}}(t)\). Compare the value of \(A_{k_{1}}(t)\) with the value of \(r_{k_{1}}^{n_{1}}(t)+r_{k_{1}}^{n_{2}}(t)\). If \(A_{k_{1}}(t) \leq r_{k_{1}}^{n_{1}} (t)+r_{k_{1}}^{n_{2}}(t)\), execute the step 4. Otherwise, let Ω=Ω∖{n 2}. Choose the SB \(n_{3}=\arg \max _{\textit {n}\in \Omega } \lbrace \gamma _{n,k_{1}} \rbrace \) and calculate the transmission rate \(r_{k_{1}}^{n_{3}}(t)\). Compare the value of \(A_{k_{1}}(t)\) with the value of \(r_{k_{1}}^{n_{1}}(t)+r_{k_{1}}^{n_{2}}(t)+ r_{k_{1}}^{n_{3}}(t)\), and repeat the above procedure until \(A_{k_{1}}(t) \leq r_{k_{1}}^{n_{1}}(t)+r_{k_{1}}^{n_{2}}(t)+ \cdots + r_{k_{1}}^{n_{m}}(t)\). If \(r_{k_{1}}^{n_{1}}(t)+r_{k_{1}}^{n_{2}}(t)+ \cdots + r_{k_{1}}^{n_{m}}(t) \leq Q_{k_{1}}(t)+A_{k_{1}}(t) \), the bit number transmitted in the slot t can be calculated as $$ \begin{aligned} V_{k_{1}}(t)=r_{k_{1}}^{n_{1}}(t)+r_{k_{1}}^{n_{2}}(t)+ \cdots + r_{k_{1}}^{n_{m}}(t). \end{aligned} $$ Otherwise \(r_{k_{1}}^{n_{1}}(t)+r_{k_{1}}^{n_{2}}(t)+ \cdots + r_{k_{1}}^{n_{m}}(t) > Q_{k_{1}}(t)+ A_{k_{1}}(t) \), the bit number transmitted in the slot t can be calculated as $$ \begin{aligned} V_{k_{1}}(t)=Q_{k_{1}}(t)+A_{k_{1}}(t). \end{aligned} $$ Then, let Φ=Φ∖{k 1}, Ω=Ω∖{n m }, we seek the user k 2= arg mink∈Φ{R k (t)}, by repeating the procedures 2, 3, and 4, and allocate several SBs for transmitting the data of the k 2th user. Repeat the procedure 5 until all the users which have the value of R k (t) have been allocated with the SBs. After that, we further allocate SBs to the users who have the value of \(P_{k_{\textit {overflow}}}^{t+T}\). We choose the user $$ k_{l}=\arg \max_{k\in \Phi }\left\lbrace P_{k_{overflow}}^{t+T} \right\rbrace. $$ and repeat the similar procedures 2, 3, 4, and 5 to allocate the resource and schedule users until Ω=∅. If Ω≠∅ and Φ=∅, it means that there are RBs which have not be used. In order to make the best utilization of RBs, we choose the users who have V k (t)<Q k (t)+A k (t) and constitute a new user set \(\bar {\Phi }\). We seek the user $$ k_{w}=\arg \max_{k\in \bar{\Phi}}\lbrace Q_{k}(t)+A_{k}(t)- V_{k}(t) \rbrace. $$ Then the remaining SB with the maximum SINR can be obtained via $$ n_{w}=\arg \max_{\textit{n}\in\Omega} \lbrace \gamma_{n,k_{w}}\rbrace. $$ According to (6), we can calculate the transmission rate \(r_{k_{w}}^{n_{w}}(t)\). If \(Q_{k_{w}}(t)+A_{k_{w}}(t)- V_{k_{w}}(t)>r_{k_{w}}^{n_{w}}(t)\), it means that the number of allocated SB is not enough to transmit the remaining bits in the buffer. 
Let Ω=Ω∖{n w }, we choose the SB \(n_{w_{1}}=\arg \max _{\textit {n}\in \Omega } \lbrace \gamma _{n,k_{w}} \rbrace \) and calculate the transmission rate \(r_{k_{w}}^{n_{w_{1}}}(t)\). Compare the value of \(Q_{k_{w}}(t)+A_{k_{w}}(t)- V_{k_{w}}(t)\) with the value of \(r_{k_{w}}^{n_{w}}(t)+r_{k_{w}}^{n_{w_{1}}}(t)\phantom {\dot {i}\!}\), if \(Q_{k_{w}}(t)+A_{k_{w}}(t)- V_{k_{w}}(t) \leq r_{k_{w}}^{n_{w}}(t)+r_{k_{w}}^{n_{w_{1}}}(t)\phantom {\dot {i}\!}\), we allocate the SBs \(\phantom {\dot {i}\!}n_{w}, n_{w_{1}}\) to the k w th user, if not, let Ω=Ω∖{n 2}, choose the SB \(n_{w_{3}}=\arg \max _{\textit {n}\in \Omega } \lbrace \gamma _{n,k_{w}} \rbrace \) and calculate the transmission rate \(r_{k_{w}}^{n_{w_{3}}}(t)\). Repeat the above procedure, choose the SBs by using the same method until \(Q_{k_{w}}(t)+A_{k_{w}}(t)- V_{k_{w}}(t) \leq r_{k_{w}}^{n_{w}}(t)+ r_{k_{w}}^{n_{w_{1}}}(t)+ \cdots + r_{k_{w}}^{n_{w_{w}}}(t)\). Accordingly, we allocate the SBs \(\phantom {\dot {i}\!}{n_{w}, n_{w_{1}}, \cdots, n_{w_{w}}}\) to the k w th user. Let \(\bar {\Phi }=\bar {\Phi } \backslash \lbrace k_{w} \rbrace \), Ω=Ω∖{n w }, and seek the user \( k_{w_{1}}=\arg \max _{k\in \bar {\Phi }}\lbrace Q_{k}(t)+A_{k}(t)- V_{k}(t) \rbrace \). Repeat the procedures 8, 9, and 10 until Ω=∅ or \(\bar {\Phi }=\emptyset \). After allocating SBs to the users in the slot t, we update the values of \(\hat c_{k}\) and g k corresponding to all the users in the slot t+1. Apply the user priority determination algorithm in the section III to rank the users' queue priority again, and then repeat the above all procedure. The workflow of the proposed method is illustrated in Fig. 3. The proposed method will use the observations of buffer fullness, data arrival rate A k (t), and CQI feedback from user equipments (UEs) to calculate the users' queue priority based on their remaining life time R k (t) or their queue overflow probability \(P_{k_{\textit {overflow}}}^{t+T}\). Once the users' queue priority has been determined, the dynamic RBs allocation algorithm based on online measurement is applied to adjust the service rate V k (t) by making decisions \({x_{k}^{n}}(t)\). Then, the decisions \({x_{k}^{n}}(t)\) are forwarded to eNB Scheduler to execute the resource scheduling. A buffer-aware adaptive resource allocation algorithm for LTE downlink transmission The algorithm operates at every beginning of the scheduling interval. The detail of the strategy is presented in Algorithm 1. In this section, we characterize the performance of our online measurement-based adaptive resource allocation algorithm, and provide performance comparisons with other five algorithms, namely CABA algorithm [23], PF metric(1) algorithm, PF metric(2) algorithm [19], MaxWeight-Alg(3) [17], and IHRR algorithm [18]. We first describe the simulation setup, and then the metrics used for performance evaluation are presented. Experiment setup We simulated a multiuser scenario, where the maximum number of communicating users was set to K=10,30,50. Here, the bit arrival rate for each user is assumed to obey the Poisson distribution with λ=50 k b i t/m s. CQI is discretized into 15 levels which correspond to 15 different pairs of modulation choice and code rate. This implies that there are 15 possible transmission rates. A mapping between SINR ranges and CQIs is presented in [34]. The CQI values are used together with the number of allocated RBs to determine the transmission rates. 
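The core of the allocation procedure described above can be condensed into the following sketch (our simplification with hypothetical data structures; the full algorithm additionally processes the users ranked by overflow probability in step 7 and redistributes leftover SBs in steps 8–11):

```python
import math


def allocate_slot(users, sinr, n_sb):
    """users: dict k -> {'R': remaining life time, 'A': arrivals, 'Q': queue length}
    sinr:  sinr[k][n] = linear SINR of user k on SB n
    Returns the SB assignment {sb: user} and the served bits per user."""
    free_sbs = set(range(n_sb))
    assignment, served = {}, {k: 0.0 for k in users}
    # Serve users in order of urgency: smallest remaining life time first (Eq. 25).
    for k in sorted(users, key=lambda k: users[k]['R']):
        while free_sbs and served[k] < users[k]['A']:        # add SBs until arrivals are covered
            best = max(free_sbs, key=lambda n: sinr[k][n])   # highest-SINR SB (Eq. 26)
            free_sbs.remove(best)
            assignment[best] = k
            served[k] += math.log2(1 + sinr[k][best])        # per-SB rate, Eq. (6)
        served[k] = min(served[k], users[k]['Q'] + users[k]['A'])   # cap as in Eqs. (27)-(28)
        if not free_sbs:
            break
    return assignment, served


# Toy usage with assumed numbers (2 users, 4 SBs):
users = {0: {'R': 3.0, 'A': 4.0, 'Q': 10.0}, 1: {'R': 8.0, 'A': 2.0, 'Q': 5.0}}
sinr = {0: [12.0, 3.0, 7.0, 1.0], 1: [2.0, 9.0, 4.0, 6.0]}
print(allocate_slot(users, sinr, n_sb=4))
```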
To evaluate the performance of the proposed dynamic resource allocation, we define three metrics as follows:

Average bit loss rate: This metric indicates the QoS of the K users. It is defined as the time-averaged bit loss rate over a period Δ, i.e., $$ \bar{C_{k}}={1 \over {\Delta+1}} {\sum_{t=T_{0}}^{T_{0}+\Delta}{ D_{k}(t) \over A_{k}(t)} }. $$ where $D_k(t)$ denotes the number of bits lost during the slot t for the kth user. Obviously, a smaller value is preferred.

Fairness: This metric is measured using Jain's fairness index [29], which is widely applied for evaluating the system fairness. It is defined as $$ F(t)=\frac{\left(\sum_{k=1}^{K} V_{k}(t)\right)^{2}}{K \sum_{k=1}^{K} {V_{k}^{2}}(t)}, $$ where F(t) denotes the fairness at time t. Then, the system fairness can be calculated according to $$ F={1 \over {\Delta+1}} {\sum_{t=T_{0}}^{T_{0}+\Delta} F(t)}. $$

Average throughput: Our aim is to improve the system throughput subject to providing QoS guarantees for different users. A larger average system throughput implies better performance.

All the simulation results were averaged over 50 independent runs.

Performance comparison for different user index

We used Matlab to implement the simulations. The simulation model is based on the 3GPP LTE system model and has a single cell with downlink transmission, where the number of RBs is 50, the carrier frequency is 2 GHz, and the system bandwidth is 10 MHz. Following similar steps to [32], we can apply the large deviation principle to analyze the confidence interval of \(\hat c_{k}\). The corresponding simulation parameters are listed in Table 1. The prediction interval is T=60, the sliding window length is $T_s=60$, the forgetting factor is ρ=0.7, and the average channel SINR is 15 dB. In order to simplify the calculation, we set \( Q_{k}^{max}= Q^{max}= 3 \times 10^{4}~bit\).

In Fig. 4, we plotted the average BLR of ten users for all the resource allocation schemes. The X axis denotes the user index. It can be seen that the proposed algorithm achieves the best performance with an average BLR of $2.12\times10^{-3}$, which is lower than those of PF metric(1) (about $2.57\times10^{-3}$), CABA (about $2.29\times10^{-3}$), IHRR (about $2.14\times10^{-3}$), and PF metric(2) (about $2.13\times10^{-3}$). MaxWeight-Alg(3) may share resources unfairly among users; hence, the curve of the average BLR for MaxWeight-Alg(3) is unstable compared with the other algorithms. For the proposed algorithm, we calculate the priority of each user queue from its remaining life time or queue overflow probability, which is then applied to allocate RBs. As a result, this helps to reduce the overflow probability of the queue with the highest priority. Thus, it achieves a lower value of the average BLR.

The average BLR for different users

In Fig. 5, we show the average throughput corresponding to ten users for the different resource allocation algorithms. It can be seen that the average throughput of each user under the proposed algorithm significantly outperforms that of CABA, PF metric(1), PF metric(2), and IHRR. The reason for this is that adaptive resource allocation with the queue priority considers both the buffer status and the RB capacity, thus improving all users' transmission rates and keeping a high fairness among all users. By contrast, PF metric(1) does not consider the queue length at all, which leads to the lowest performance. PF metric(2) suffers from the isolated RB assignment strategy, and thus it fails to improve the system throughput.
For CABA, the weighted factor in the priority function may influence the performance. Considering that the user priority determination in IHRR has some limitation for resource allocation. MaxWeight-Alg(3) performs better than the other algorithms, but it does not consider the fairness. The average throughput for different user Performance at different SINRs This section investigates the performance of the proposed algorithm and other compared algorithms under different channel SINR conditions. In the simulation, the average channel SINR recorded varies from 11 to 20 dB with a step-size of 1 dB. The other parameters and simulation settings were the same as those in Section 'Performance comparison for different user index'. The average BLR for all users is calculated by $$ \bar{C}={1 \over K} \sum_{k=1}^{k=K} \bar{C_{k}}. $$ The average BLR versus the average channel SINR for these resource allocation schemes with ten users were plotted in Fig. 6. As shown in Fig. 6, the average BLR of the proposed method decreases as the value of SINR increases. The reason is that, at low SINR region, the bit service rate is not sufficient, and the current queue may have a shorter remaining life time or a larger overflow probability, which induces a large number of bits lost. As the average SINR increases, the bit service rate is increasing. Thus, the remaining life time is prolonged as well as the overflow probability decreases, which may reduce the average BLR. Compared with other algorithms, the proposed algorithm achieves the lowest BLR. The reason is that other algorithms fail to consider the priorities of user queues based on the buffer status and the RBs capacity. The average BLR for different average channel SINR Figure 7 shows the fairness of the proposed algorithm, CABA, PF metric(1), PF metric(2), MaxWeight-Alg(3), and IHRR. We applied Jain's fairness index in the simulation. It is shown that the fairness index of the proposed algorithm is the highest among these algorithms, which is approximately 0.998. This indicates that the queue priority assists the proposed algorithm to balance the resource allocation among the users, thereby achieving certain throughput fairness. From Fig. 7, we can see that the proposed algorithm may be insensitive to the value of the average SINR, but other algorithms undergo a relatively large variation for the different average SINRs. This also implies that their performance is subject to the channel quality. The fairness for different average channel SINR Figure 8 shows the average system throughput for different average channel SINR for the different resource allocation schemes with ten users. As shown in Fig. 8, the average system throughput increases upon increasing the average SINR. The results demonstrate that MaxWeight-Alg(3) performs better than the other strategies in terms of the overall throughput, but it has a lowest fairness level as shown in Fig. 7. For the rest algorithms, the proposed algorithm performs the better. The reason is that by choosing the appropriate RBs for the user queues according to their priority, the system throughput is improved. Combining the results of Figs. 7 and 8, it can be concluded that compared to the other methods, the proposed method both improves the fairness and the system throughput. This essentially benefits from the technique of queue priority applied in the proposed method. 
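For reference, the fairness and loss metrics used in these comparisons can be computed as in the following sketch (our reimplementation of Jain's index from Eqs. (33)–(34) and of the user-averaged BLR; the inputs are assumed to be per-slot simulation logs):

```python
def jain_fairness(served_bits):
    """Eq. (33): Jain's index for one slot, served_bits = [V_1(t), ..., V_K(t)]."""
    total = sum(served_bits)
    return total ** 2 / (len(served_bits) * sum(v ** 2 for v in served_bits))


def average_fairness(per_slot_served):
    """Eq. (34): time average of the per-slot Jain index."""
    return sum(jain_fairness(v) for v in per_slot_served) / len(per_slot_served)


def average_blr(dropped, arrived):
    """Mean over users of the time-averaged bit loss rate D_k(t)/A_k(t);
    dropped[k] and arrived[k] are the per-slot traces of user k."""
    per_user = [sum(d / a for d, a in zip(d_k, a_k)) / len(d_k)
                for d_k, a_k in zip(dropped, arrived)]
    return sum(per_user) / len(per_user)


print(jain_fairness([3.0, 3.0, 3.0]))   # 1.0: perfectly fair slot
print(jain_fairness([9.0, 0.1, 0.1]))   # close to 1/K: very unfair slot
```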
The average throughput for different average channel SINR Performance for different number of users This section investigates the performance of the proposed algorithm and other benchmark algorithms for different number of users. In the simulation, the number of users K were chosen in the range [10, 50]. The other parameters and simulation settings were the same as those in Section 'Performance comparison for different user index'. Figure 9 shows that the average BLR decreases as the number of users increases for the different resource allocation schemes. This reason is that the same amount of resources has to be shared among a higher number of candidates, which implies that with the increasing number of users, there is a much higher probability of bit loss. From Fig. 9, it can be observed that the average BLR of the proposed algorithm is the lowest among these algorithms, which maintains a small and steady growth trend with the increasing number of users in the cell. Since PF metric(1) does not consider buffer fullness, and PF metric(2), MaxWeight-Alg(3), IHRR as well as CABA fail to characterize the priorities of user data queues based on the buffer status and the RBs capacity, they have a higher average BLR than the proposed algorithm. The average BLR for different number of users Figure 10 shows that the fairness index for the different resource allocation schemes decreases as the number of users increases. The fairness index of the proposed algorithm is the highest among these algorithms, which means that it provides high fairness regardless of the user in the cell. The reason for this is that the queue priority based on the buffer status and the RBs capacity assists the proposed algorithm to balance the resource allocation among the users. The algorithm having the worst fairness index is MaxWeight-Alg(3); the reason is that it aims to maximize the overall system throughput, rather than the throughput of a single user. The fairness for different number of users Figure 11 shows that the average user throughput for all strategies decreases as the number of users increases. This result is natural because a higher number of candidates are sharing the same amount of resources. MaxWeight-Alg(3) results in the highest throughput, followed by the proposed, PF metric(2), CABA, IHRR, and PF metric(1). From Figs. 9, 10, and 11, it can be concluded that compared to the other methods, the proposed method both improves the fairness and the average user throughput, at the same time reduces the average BLR. The average throughput for different number of users Effect of prediction interval (T) In this section, we carried out an experiment in order to investigate the effect of the prediction interval by setting T=20,40,60,80,100. The other parameters and the simulation settings were the same as those in Section 'Performance comparison for different user index'. Figures 12 and 13 show the simulation results for the different prediction intervals. Observe in Fig. 12 that as the prediction interval duration increases, the average BLR decays rapidly. Although the prediction interval, T, should be sufficiently large according to the large deviation approximation in (20), the simulation results show that a choice T≥60 allows the proposed algorithm to achieve a reduced bit loss rate. Figure 13 shows that the average throughput increases upon increasing the interval T. 
This is because as T increases, the queue overflow probability estimate becomes more accurate, which results in more accurate resource allocation and thus a higher average throughput.

The average BLR for different prediction intervals (T)

The average throughput for different prediction intervals (T)

Effect of buffer size ($Q^{max}$)

In this section, we carried out an experiment in order to analyze the performance obtained by changing the buffer size $Q^{max}$ from $0.5\times10^{4}$ bit to $5\times10^{4}$ bit with a step-size of $0.5\times10^{4}$ bit. The other parameters and the simulation settings were the same as those in Section 'Performance comparison for different user index'. The simulation results are plotted in Figs. 14 and 15.

The average BLR for different buffer sizes ($Q^{max}$)

The average throughput for different buffer sizes ($Q^{max}$)

From Fig. 14, we can see that the average BLR decreases rapidly as the buffer size increases from $0.5\times10^{4}$ bit to $2\times10^{4}$ bit. This means that a too-small buffer size is more likely to incur queue overflow and bit loss. As $Q^{max}$ continues to increase, the average BLR reduces slowly. The reason is that a larger buffer capacity implies a lower probability of buffer overflow. Figure 14 shows that the proposed algorithm outperforms the other methods in terms of average BLR. This benefits from the application of the user queue priority calculated from the remaining life time or the queue overflow probability. We also observe from Fig. 15 that the average system throughput of all strategies is improved by increasing the buffer size. The proposed algorithm performs better than the other algorithms except for MaxWeight-Alg(3). The reason is that increasing the buffer size decreases the queue overflow probabilities; the proposed algorithm chooses the appropriate RBs for the user queues according to their remaining life time or queue overflow probability, and the system throughput is thereby improved. From Figs. 14 and 15, we can conclude that, compared with the other algorithms, the proposed method reduces the average BLR and improves the average system throughput as the buffer size increases.

Conclusion

In this paper, we jointly consider user queue priority and RB capacity to develop a buffer-aware adaptive resource allocation scheme for LTE transmission systems. Under the constraint of finite buffer space, the proposed scheme aims to improve both the overall system throughput and the statistic QoS while keeping certain fairness among users. We derived an analytical formula based on the large deviation principle for estimating the overflow probability as a function of the buffer variance. Also, the remaining life time of a queue was defined, and its estimation model was presented. Both the queue overflow probability and the remaining life time were applied to determine the queue priority. According to the queue priority, an online measurement-based algorithm was proposed to schedule RBs for adjusting the service rates of the user queues. The proposed algorithm does not rely on any prior knowledge about network conditions. Numerical results show that, compared to traditional scheduling schemes, the proposed algorithm achieves a better tradeoff among throughput, fairness, and QoS: it improves the average system throughput and keeps a better fairness among users, while reducing the average BLR. It should be pointed out that this paper considered all the traffic at the eNodeB. However, the emerging technology of SDN and deep packet inspection (DPI) middleboxes can be applied to identify the traffic.
Hence, we will consider the application aware scheduling in our future work with the aid of SDN and DPI. A Toskala, H Holma, K Pajukoski, E Tiirola, in Proceedings of IEEE 17th International Symposium on Personal, Indoor and Mobile Radio Communications. UTRAN long term evolution in 3gpp (Helsinki, 2006), pp. 1–5. WY Yeo, SH Moon, JH Kim, Uplink scheduling and adjacent-channel coupling loss analysis for TD-LTE deployment. Sci. World J. 2014, 1–15 (2014). P Phunchongharn, E Hossain, DI Kim, Resource allocation for device-to-device communications underlaying LTE-advanced networks. IEEE Wirel. Commun. 20(4), 91–100 (2013). Y Peng, SM Armour, JP McGeehan, An investigation of dynamic subcarrier allocation in mimo–ofdma systems. IEEE Trans. Veh. Technol. 56(5), 2990–3005 (2007). M Katoozian, K Navaie, H Yanikomeroglu, Utility-based adaptive radio resource allocation in ofdm wireless networks with traffic prioritization. IEEE Trans. Wireless Commun. 8, 66–71 (2009). J Huang, Z Niu, in Proceedings of IEEE Wireless Communication and Networking Conferenc (WCNC). Buffer-aware and traffic-dependent packet scheduling in wireless ofdm networks (Hong Kong, 2007), pp. 1554–1558. IC Wong, O Oteri, W McCoy, Optimal resource allocation in uplink SC-FDMA systems. IEEE Trans. Wirel. Commun. 8(5), 2161–2165 (2009). HAM Ramli, R Basukala, K Sandrasegaran, R Patachaianand, in Proceedings of IEEE Malaysia International Conference on Communications (MICC). Performance of well known packet scheduling algorithms in the downlink 3GPP LTE system (Kuala Lumpur, 2009), pp. 815–820. I Bisio, M Marchese, The concept of fairness: definitions and use in bandwidth allocation applied to satellite environment. IEEE Aerosp. Electron. Syst. Mag. 29(3), 8–14 (2014). Z Zhang, Y He, EK Chong, in Proceedings of IEEE Wireless Communication and Networking Conferenc(WCNC), 2. Opportunistic downlink scheduling for multiuser OFDM systems (New Orleans, 2005), pp. 1206–1212. Z Diao, D Shen, VO Li, in Proceedings of IEEE Global Communicatios Conference(GLOBECOM), 6. CPLD-PGPS scheduling algorithm in wireless OFDM systems (Dallas, 2004), pp. 3732–3736. G Song, Y Li, Utility-based resource allocation and scheduling in OFDM-based wireless broadband networks. IEEE Commun. Mag. 43(12), 127–134 (2005). S Ryu, B Ryu, H Seo, M Shin, in Proceedings of IEEE International Conference on Communications (ICC), 4. Urgency and efficiency based packet scheduling algorithm for OFDMA wireless system (Seoul, 2005), pp. 2779–2785. A Ahmedin, K Pandit, D Ghosal, A Ghosh, in Proceedings of the Conference on Emerging Networking EXperiments and Technologies (CoNEXT) student workshop. Content and buffer aware scheduling for video delivery over lte (Santa Barbara, 2013), pp. 43–46. T-Y Huang, R Johari, N McKeown, M Trunnell, M Watson, in Proceedings of the ACM Conference on SIGCOMM. A buffer-based approach to rate adaptation: evidence from a large video streaming service (Chicago, 2014), pp. 187–198. A Dua, N Bambos, in Proceedings of IEEE Global Telecommunications Conference (GLOBECOM). Buffer management for wireless media streaming (Washington, 2007), pp. 5226–5230. M Andrews, L Zhang, Scheduling algorithms for multicarrier wireless data systems. IEEE Trans. Netw. 19(2), 447–455 (2011). M Realp, R Knopp, AI Perez-Neira, in Proceedings of IEEE International Symposium on Personal, Indoor and Mobile Radio Communications (PIMRC), 2. Resource allocation in wideband wireless systems (Berlin, 2005), pp. 852–856. 
S Lee, Swap-based frequency-domain packet scheduling algorithm for small-queue condition in OFDMA. IEEE Commun. Lett. 17, 1028–1031 (2013). IF Chao, CS Chiou, in Proceedings of IEEE Wireless Communications and Networking Conference (WCNC). An enhanced proportional fair scheduling algorithm to maximize QoS traffic in downlink OFDMA systems (Shanghai, 2013), pp. 239–243. B Yang, K Niu, Z He, W Xu, Y Huang, in Proceedings of IEEE International Symposium on Personal, Indoor and Mobile Radio Communications (PIMRC). Improved proportional fair scheduling algorithm in LTE uplink with single-user MIMO transmission (London, 2013), pp. 1789–1793. MR Sabagh, M Dianati, MA Imran, R Tafazolli, in Proceedings of IEEE International Conference on Communications (ICC). A heuristic energy efficient scheduling scheme for VoIP in 3GPP LTE networks (Budapest, 2013), pp. 413–418. L Yan, GY Yue, in Proceedings of IEEE International Conference on Wireless Communications, Networking and Mobile Computing (WICOM). Channel-adapted and buffer-aware packet scheduling in LTE wireless communication system (Dalian, 2008), pp. 1–4. JA Bucklew, Large Deviation Techniques in Decision, Simulation, and Estimation (2007). KI Pedersen, TE Kolding, F Frederiksen, IZ Kovács, D Laselva, PE Mogensen, An overview of downlink radio resource management for UTRAN long-term evolution. IEEE Commun. Mag. 47(7), 86–93 (2009). 3GPP, Further advancements for E-UTRA physical layer aspects (release 9). 3GPP TR 36.814, (2010). F Capozzi, G Piro, LA Grieco, G Boggia, P Camarda, Downlink packet scheduling in LTE cellular networks: Key design issues and a survey. IEEE Commun. Surv. Tutorials. 15(2), 678–700 (2013). Y Timner, J Pettersson, H Hannu, M Wang, I Johansson, in Proceedings of the 2014 ACM SIGCOMM Workshop on Capacity Sharing Workshop. Network assisted rate adaptation for conversational video over LTE, concept and performance evaluation (Chicago, 2014), pp. 45–50. A Bin Sediq, RH Gohary, H Yanikomeroglu, in Proceedings of IEEE International Symposium on Personal, Indoor and Mobile Radio Communications (PIMRC). Optimal tradeoff between efficiency and jain's fairness index in resource allocation (Sydney, 2012), pp. 577–583. JS Chase, DC Anderson, PN Thakar, AM Vahdat, RP Doyle, in ACM SIGOPS Operating Systems Review, 35. Managing energy and server resources in hosting centers, (2001), pp. 678–700. M Mandjes, Large Deviations for Gaussian Queues: Modelling Communication Networks (John Wiley & Sons, West Sussex, 2007). C Budianu, L Tong, in Proceedings of IEEE International Conference on Acoustics, Speech and Signal (ICASSP), 2. Good-turing estimation of the number of operating sensors: a large deviations analysis (Quebec, 2004), pp. ii–1029. ES Gardner, Exponential smoothing: the state of the art. J. forecasting. 4(1), 1–28 (1985). 3GPP, Physical layer procedures (release 9). 3GPP TS 36.214, 6.2.0, (2010). University of Science and Technology of China, Hefei, China Ruiyi Zhu & Jian Yang Search for Ruiyi Zhu in: Search for Jian Yang in: Correspondence to Jian Yang. Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0), which permits use, duplication, adaptation, distribution, and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. Zhu, R., Yang, J. 
Buffer-aware adaptive resource allocation scheme in LTE transmission systems. J Wireless Com Network 2015, 176 (2015) doi:10.1186/s13638-015-0398-y Dynamic resource allocation Large deviation principle
CommonCrawl
Detection of illicit cryptomining using network metadata
Michele Russo ORCID: orcid.org/0000-0001-6876-27981, Nedim Šrndić1 & Pavel Laskov2
EURASIP Journal on Information Security volume 2021, Article number: 11 (2021)

Illicit cryptocurrency mining has become one of the prevalent methods for monetization of computer security incidents. In this attack, victims' computing resources are abused to mine cryptocurrency for the benefit of attackers. The most popular illicitly mined digital coin is Monero, as it provides strong anonymity and is efficiently mined on CPUs. Illicit mining crucially relies on communication between compromised systems and remote mining pools using the de facto standard protocol Stratum. While prior research primarily focused on endpoint-based detection of in-browser mining, in this paper, we address network-based detection of cryptomining malware in general. We propose XMR-Ray, a machine learning detector using novel features based on reconstructing the Stratum protocol from raw NetFlow records. Our detector is trained offline using only mining traffic and does not require privacy-sensitive normal network traffic, which facilitates its adoption and integration. In our experiments, XMR-Ray attained a 98.94% detection rate at a 0.05% false alarm rate, outperforming the closest competitor. Our evaluation furthermore demonstrates that it reliably detects previously unseen mining pools, is robust against common obfuscation techniques such as encryption and proxies, and is applicable to mining in the browser or by compiled binaries. Finally, by deploying our detector in a large university network, we show its effectiveness in protecting real-world systems.

At the end of 2017, the cryptocurrency market reached a market capitalization of over $600 billion [1]. Since then, the market has demonstrated its interest in this technology, and cryptocurrencies have proven to be a revolutionary asset class. In May 2021, another surge of the cryptocurrency market occurred, taking the market valuation to a record of over $2 trillion [2]. However, the potential financial gains attracted not only investors but also malicious actors. Security risks related to cryptocurrencies are diverse. Conventional hacking incidents at cryptocurrency exchanges (e.g., [3, 4]) led to vast financial losses and even, as in the case of MtGox, to bankruptcy. Other prominent incidents included manipulation of wallet addresses [5], exploitation of wallet software [6–8], and compromise of mining power exchanges such as NICEHASH [9]. In contrast to such jackpot-style incidents, illicit mining of digital coins [10] is a far more reliable source of income for criminals. Various sources have reported a dramatic rise in cryptocurrency mining malware in the last 2 years. The year 2018 began with a surge in mining attacks [11], with the trend rising throughout the year [12], during which time the number of cryptomining malware samples grew by more than 4000% [13]. In the first quarter of 2019, the number of campaigns targeting victims' computers to mine cryptocurrencies reportedly increased by 29% [14], and the recovery in trading price starting from May 2019 resulted in a spike in cryptomining malware operations [15, 16]. Finally, large-scale attacks targeting enterprises were still widespread in 2020 (e.g., [17–19]). Our work is focused on detecting mining of the Monero cryptocurrency (abbr.
XMR), the most prevalent one in the illicit cryptomining ecosystem due to its anonymity guarantees and the feasibility of mining it on CPUs [20]. Monero mining is typically performed by a group of miners (clients) connected to a common mining pool (server) which shares the workload and profit among miners. This arrangement boosts the chance for the entire pool to mine a new block and consequently obtain a reward for it, in turn maximizing the long-term expected gain for individual miners. The de facto standard application layer protocol for communication between pool servers and clients is Stratum [21]. It has a minimalist syntax and uses simple logic for pool mining, which triggered our interest in investigating whether it can be discriminated from other network traffic. Illicit cryptomining attacks can be categorized into two main families. In-browser cryptomining (also cryptojacking, browser-based mining) runs in the victim's web browser for as long as she stays on a web page with a mining script, e.g., CRYPTOLOOT [22]. Binary-based cryptomining malware is typically delivered via trojans which download and execute mining binaries as background processes. In both cases, by abusing hundreds of hijacked devices, attackers can amass significant computational power and generate substantial earnings. Section 2 presents further details on "benign" and "malicious" mining. From the victims' perspective, illicit cryptomining incurs electricity and cloud costs, and causes devices to slow down and deteriorate rapidly, to the point of potentially being physically destroyed [23]. Crucially, it should be recognized as a breach indicator and promptly rectified to prevent further serious damage, e.g., confidential data theft. Due to these security implications, illicit cryptomining has received growing attention in the research community, cf. Section 3. The majority of previous work focused on understanding and assessing the profitability of in-browser mining [24–27]; other work discussed its detection and mitigation [28–31]. Much less attention has been paid to binary-based cryptomining, comprising essentially the early investigation of Bitcoin mining botnets [32] and the comprehensive study of Monero mining malware [20]. In terms of detection techniques, the majority of prior work spotted mining on the endpoints by monitoring CPU or GPU usage and process metrics (processor time, system calls, number of threads, etc.), or in the network using simple indicators of compromise like domain and IP blacklists or IPS rules. While endpoint-based approaches are appealing in their capability to identify salient features of cryptomining, their deployment is labor-intensive. Network-based detection, in contrast, can be localized on critical network nodes, but existing approaches cannot cope with proxies and encryption. In this work, we demonstrate that essential properties of cryptomining activity can be accurately recovered from aggregated network traffic, i.e., NetFlow records, thus combining the advantages of both approaches. The unique capabilities of our proposed detector XMR-RAY stem from key design decisions elaborated in Section 5.1. It inspects network traffic metadata, i.e., NetFlow records (see example in Table 1).
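For orientation, the kind of metadata a single NetFlow record carries can be sketched as follows; this is a minimal illustration with hypothetical field names and values, not the actual record shown in Table 1:

```python
# Hypothetical, simplified NetFlow-style record (one unidirectional flow).
# Field names and values are illustrative; real exporters use NetFlow v5/IPFIX templates.
flow_record = {
    "src_addr": "10.0.0.23",      # miner (client)
    "dst_addr": "203.0.113.7",    # pool (server)
    "src_port": 52814,
    "dst_port": 4444,
    "protocol": 6,                # TCP
    "packets": 118,               # packets counted in this flow export
    "bytes": 9437,                # bytes counted in this flow export
    "first_seen": 1620000000.0,   # flow start timestamp
    "last_seen": 1620000100.0,    # flow end timestamp (exported after the timeout)
}
```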
To alleviate the information loss in comparison to a full traffic dump, we introduce our principal innovation: a novel set of features based on constraint-solving which indicate whether a set of NetFlow records corresponding to a TCP session exhibits unique statistical regularities characteristic of Stratum. Using our novel features, Stratum traffic can be discriminated from other traffic by deploying a one-class classifier trained solely on mining traffic obtained from legitimate mining pool clients. As a result, we obtain a highly efficient, privacy-preserving detector resistant to encryption, tunneling and proxies, and outperforming detectors based on DPI. Its design and implementation are presented along with data collection in Section 5. Table 1 NetFlow records corresponding to the traffic window depicted in Fig. 2 In Section 6, we show that XMR-RAY, trained once on legitimate cleartext mining traffic in our lab, successfully generalizes to a variety of use cases. First, we evaluate it in a controlled environment with traffic collected from a large corporate network (Section 5.3). Next, we assess its robustness against encryption as well as tunneling and apply it to both in-browser and binary-based mining traffic. We also show that it successfully detects traffic generated by mining malware. Finally, we deploy it in a large university network and assess its false alarm rate. The main contributions of this work are: We design XMR-Ray, a machine learning system for detection of cryptocurrency mining. We propose a novel set of features for machine learning based on a form of constraint-solving which indicate whether NetFlow records corresponding to a TCP stream exhibit unique statistical regularities characteristic of Stratum. We evaluate the detection performance of the proposed solution on a comprehensive collection of real-world traffic and compare it to the closest related prior work. We assess the system's robustness against encryption, tunneling, proxying, and adversarial machine learning. We assess the false-positive rate of our system in a real-world deployment in a large university. Following the experimental evaluation, we discuss the main limitations of XMR-Ray in Section 7 and conclude the paper. Mining is a fundamental operation for cryptocurrency networks. Every transaction added to a public ledger must be verified to avoid over- or double-spending. In the absence of a central authority, participants of the peer-to-peer network verify several transactions of their choice. Transactions are grouped into blocks that are chained together by applying cryptographic hash functions to the content of the new block as well as the hash values of previous blocks, thus creating a tamper-proof "block-chain" in the ledger. Two mechanisms ensure that each participant verifies transactions honestly. First, obtaining the right to permanently add a block to the ledger is made expensive. Hence, every participant needs to "invest" resources into verification and provide an easily verifiable proof-of-work (PoW) for each block. Second, honest work is adequately rewarded. Both mechanisms outlined above are implemented by mining, which entails a contest to find a padding to a block such that the hash function computed over the padded block evaluates to a value smaller than the given threshold. Cryptographic hash functions are irreversible; hence, the only way to solve this problem is to brute-force the padding.
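As a toy illustration of this brute-force search (real mining uses a memory-hard PoW hash function and far harder targets; the snippet below only conveys the principle):

```python
import hashlib

def toy_mine(block_data: bytes, target: int) -> int:
    """Brute-force a nonce so that hash(block_data || nonce) < target.

    Toy sketch only: real PoW schemes use a different, memory-hard hash
    function, and miners search the nonce space in parallel.
    """
    nonce = 0
    while True:
        digest = hashlib.sha256(block_data + nonce.to_bytes(8, "little")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
        nonce += 1

# A deliberately easy target so the loop terminates quickly.
print("found nonce:", toy_mine(b"example block header", target=1 << 240))
```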
The first participant to find a suitable padding announces the solution to the network and receives a small value in cryptocurrency as a reward. The threshold, and hence the problem difficulty, is adjusted such that a solution is expected once in a certain time interval. A participant's chance of finding a solution first is proportional to their share of computational resources within the network. For miners without specialized computing resources, pooling together is the only realistic chance to gain any reward. A mining pool server, connected to a cryptocurrency network, distributes tasks to pool participants (clients). When a client finds a solution, it reports it to the server, which in turn verifies it and broadcasts it to the cryptocurrency network. Once the solution is verified by the network, the server receives the reward, withholds a commission and distributes the rest among clients according to their work. A specialized application layer protocol called Stratum was developed for pooled mining. It is a line-based text protocol using TCP sockets and JSON-encoded messages. Although there is no official standard for Stratum and its implementations may differ for different currencies, its workflow remains largely the same across implementations. Mining pools typically implement the following operations using Stratum: broadcast mining jobs to the pool participants; collect the participants' hashes/shares; look for the block reward; and track the participants' contributions and distribute their rewards proportionally. Pooled mining is particularly popular for ASIC-resistant currencies like Monero, Dashcoin, and AEON, as they use a PoW algorithm designed to be egalitarian and efficient to compute on CPUs. Malicious mining Illegitimate or malicious mining refers to mining performed on hijacked resources. For example, employees may mine on a corporate computing cluster, or malware may use the CPU of an infected endpoint for monetization. There exist two prevalent techniques for malicious mining. Binary-based cryptomining This type of mining is performed by malware delivered via spam, exploit kits, or trojans. Once a device is infected, it downloads and starts a mining binary (executable). Previous studies reveal that cryptomining malware is usually a variant of a legitimate open-source mining client with custom configuration parameters, e.g., a hard-coded address of the miscreant's cryptocurrency wallet, the address of the mining pool or proxy, etc. [20]. Monero (abbrev. XMR) is the preferred currency used by cryptomining malware, and XMRIG [33] is a popular legitimate mining tool deployed in most illicit mining campaigns [20, 34]. Since 2017, the number of coin miners has surged, with almost 4 million new samples in the third quarter of 2018 alone [13]. While the mining functionality has remained stable, its delivery methods and operational mechanisms have evolved with the malware ecosystem. Fileless malware [35] spreads using a combination of PowerShell and the EternalBlue exploit [36]. Some samples terminate rival mining malware on victim hosts to maximize their resource use [37]. Ransomware enhanced with cryptocurrency mining capability decides in real time which of the two strategies is more lucrative [38]. In-browser mining This type of mining takes place in users' web browsers while they are visiting a web page with an active mining script. It is considered legitimate if the site owner obtains the user's consent and may be financially more attractive to the owner than web advertisement [39].
In-browser Monero mining was pioneered by COINHIVE [40], which provided an efficient JavaScript mining client for embedding into websites, with claims of achieving up to 65% of the hashing rate of binary-based mining. However, this type of mining is often abused by criminals who inject JavaScript mining scripts into vulnerable web sites. A number of such attacks have been reported using COINHIVE scripts on, e.g., government websites of the USA, the UK, and Australia [41]. Attackers also abused Google's DoubleClick service, making the affected web pages show legitimate advertisement while a web miner covertly performed mining in the background [42]. Another insidious attack vector targeted carrier-grade MikroTik routers vulnerable to remote login with authentication bypass [43]. In this case, attackers abused the routers' web proxy functionality to inject COINHIVE scripts into websites browsed by users who accessed the Web through them. Below, we present and categorize the most relevant work addressing the phenomenon of cryptojacking. For a comprehensive and detailed review of prior work, we refer the reader to [44]. In addition, we place this work in the broader context of threat detection and traffic classification. Detection of non-mining threats in network traffic While detection initially focused on network packets [45, 46], rising link speeds made packet analysis infeasible, and alternative approaches emerged based on aggregated network flows, especially NetFlow [47]. Combined flow- and packet-based approaches were introduced for malware family classification [48, 49]. Pure flow-based systems have been successful in detecting malware-related activities [50–52] like botnets [53–55], DoS attacks [56–58], scans [59–61], and worms [62, 63]. Similarly, our work is based on NetFlow, but we introduce novel features specialized for mining detection. Traffic classification The problem addressed in our work may be seen as a special case of traffic classification, namely, identification of the Stratum protocol traffic. Hence, our method is related to the respective work on traffic classification. Conti et al. proposed a method for identifying user actions in Android applications based on information from TCP/IP packet metadata in encrypted traffic [64, 65]. Papadogiannaki et al. proposed a pattern language for describing IP packet sequences and used it for fine-grained identification of application events, also in encrypted traffic [66]. Traffic classification was also performed using the first 5 packets of a TCP connection [67] or counting received packets and bytes [68]. The impact of traffic sampling on the performance of detection systems was studied in [69]. We refer the interested reader to [70, 71] for an exhaustive literature survey of traffic analysis and classification. A considerable body of related work investigated the cryptojacking ecosystem. In 2014, Huang et al. reverse-engineered a large population of Bitcoin mining malware and analyzed its network traffic to study operational behavior, geographical distribution, and other features [32]. They identified coherent botnet campaigns, estimated their earnings and speculated about their payout strategies. A similar study has addressed the clandestine ecosystem of malicious Monero mining binaries [20]. Other longitudinal studies addressed in-browser cryptojacking, most prominently MINESWEEPER [28], presenting a new detection technique using static JavaScript analysis and monitoring CPU cache events during WebAssembly execution.
An analysis of Alexa's top 1 million sites provided insights into campaigns' earnings estimates, distribution networks, pools, proxies, and services. Bijmans et al. [27] extend this work with a larger list of domains gathered from different sources. They analyze NetFlow records associated with mining services and assess their usage distribution. Other studies addressed similar questions, differing primarily in techniques for the detection of cryptojacking websites [24–26, 72–75]. Endpoint- and cloud-based detection Early work on detection addressed Bitcoin mining in cloud environments. Solanas et al. [76] proposed a method for detecting undesirable activities, including Bitcoin mining, by using privacy-friendly features extracted from OpenStack implementations. They use metrics such as CPU utilization, disk read/write request rates/bytes and network byte/packet rates to train a detector. A similar approach was taken in [77]. Analogously, hardware-assisted profiling has been used to create discernible signatures for various mining algorithms [78, 79]. Other related work relies on analysis of power consumption, network logs and web resources [80], hardware performance indicators [81, 82], or on a combination of both Windows performance counters and network flow features [83]. RAPID analyzes endpoint performance from the web browser to detect in-browser mining using machine learning in real time [29]. Naseem et al. convert Wasm binaries to gray-scale images and utilize a convolutional neural network-based classifier to label an image as either malicious (i.e., cryptojacking) or benign [84]. Cryptojacking was also detected by tracing opcodes during execution, although not from the browser itself [31]. In the field of memory forensics, Ali et al. [85] proposed several techniques for extraction of key cryptojacking indicators, e.g., wallet addresses and Stratum protocol messages, from memory images. Side channels have also been successfully used for detection [86]. Detection of cryptojacking at the endpoints or in cloud infrastructure may appear straightforward since cryptojacking inherently incurs high CPU and memory usage. Nevertheless, it remains susceptible to false positives (processes with similar resource usage profiles) and evasion (attackers throttle the consumption to stay under the radar). Especially in enterprise environments, deployment and orchestration of endpoint detection may be very costly. Network-based detection Guidelines published in technical papers, e.g., [87], typically present simple network detection rules. Others, e.g., [88], list indicators of compromise (IoC) for Monero mining, e.g., pool domains, wallet addresses, C&C communication patterns. However, such techniques are routinely evaded by attackers. Academic research developed complementary ideas. Swedan et al. present a system for gateway-based traffic analysis which dissects HTTP(S) traffic to extract and analyze JavaScript in real time using heuristic rules [89]. This approach is nevertheless susceptible to evasion using JavaScript obfuscation and bears a substantial operational burden associated with HTTPS proxies. Several papers [90–93] rely on computing features over packet flows and training binary classification machine learning models. They achieve high detection accuracy at the expense of computation and deployment overhead. Motivated to achieve accurate detection and reduce cost, recent papers investigate using NetFlow [94, 95].
They propose binary classification to discriminate between mining and non-mining traffic, utilizing the C4.5 decision tree algorithm and generic NetFlow-based features known from earlier work [96]. However, binary classification complicates deployment, as the model needs to be trained from scratch in every environment ("train everywhere"). In contrast, our detector is trained once using only mining traffic and can subsequently be deployed in any environment without retraining because it does not depend on the highly diverse non-mining traffic ("train once, deploy everywhere"). Furthermore, our work introduces novel features highly specialized for mining detection, and we experimentally demonstrate its superior detection performance. Stratum traffic analysis In this section, we present the details of the Stratum pool mining protocol. To this end, we perform a manual analysis of full packet traces of Stratum traffic captured in our lab and show the most important findings. Our traffic collection effort is documented in Section 5.3. Communication over Stratum starts with the pool client (miner) contacting the pool server and performing authentication. As soon as the miner successfully logs into a mining pool, it starts receiving jobs via New Job messages, which have the general format shown in Listing 1. In the examples, we show the format used by a specific combination of client-server tools, i.e., the mining pool mine.xmrpool.net and mining client XMRIG 2.6.4, but our algorithm is designed to be independent of the specific implementation and focuses on the Stratum workflow. The main properties of New Job messages are as follows: blob: an ASCII hexadecimal string (76 bytes/152 characters) representing the content to be hashed by miners in order to produce Monero coins. A portion of this string is editable and represents the nonce that can be arbitrarily set by miners. job_id: a string used to match jobs with their corresponding results; it has a variable length which depends on the pool implementation. target: in order for the block to be accepted by the pool, its header hash must be lower than or equal to the current target; thus, lowering the target makes the hash computation more difficult. When a mining pool client receives a New Job message from the server, it starts the mining process. This involves finding a nonce such that the value generated by hashing the new blob, using the PoW hash function, is lower than the target. The information included in the New Job message is sufficient to define the problem. Once the miner finds a solution, it submits its result to the pool in the form of a Solution Submission message. An example of this type of message can be seen in Listing 2. It contains several fields, but 3 are relevant for our analysis. The job_id corresponds to the job which has been completed. The nonce (8 hexadecimal characters denoting a 4-byte value) is the value found by the miner and used to produce a suitable hash. The hash value of the block header using the found nonce is returned as result. The mining pool server receives the Solution Submission message. After verifying the correctness of the embedded solution, it sends back a Submission Result message. An example is shown in Listing 3. This pattern of communication between the pool client and server, denoted in Fig. 1 as Unit, repeats until the network connection is terminated. Typical Stratum mining workflow A detailed example of the communication between a miner and a Monero public pool over Stratum is shown in Fig. 2.
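To make the message structure concrete, the sketch below shows roughly what New Job and Solution Submission payloads look like when expressed as JSON objects. The field names follow the descriptions above, but the values and the exact message envelope are hypothetical and vary between pool implementations.

```python
import json

# Hypothetical New Job notification from the pool (values are placeholders).
new_job = {
    "jsonrpc": "2.0",
    "method": "job",
    "params": {
        "blob": "0707...e3",      # 152 hex characters (76 bytes) in real messages
        "job_id": "283746",       # variable length, pool-dependent
        "target": "b88d0600",     # encodes the current difficulty threshold
    },
}

# Hypothetical Solution Submission from the miner for the same job.
submission = {
    "jsonrpc": "2.0",
    "id": 42,
    "method": "submit",
    "params": {
        "job_id": "283746",
        "nonce": "a01c3f90",      # 8 hex characters (4 bytes) found by the miner
        "result": "5f2a...c1",    # PoW hash of the padded blob
    },
}

print(json.dumps(new_job))
print(json.dumps(submission))
```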
The two subfigures of Fig. 2 show the TCP/Stratum packets sent by the pool and the miner, respectively, during the mining process. The plot is based on full-packet capture (DPI) for demonstration purposes, but our algorithm works on NetFlow data. As an example, NetFlow records corresponding to the window of network traffic between the dashed vertical lines are presented in Table 1. Example of Stratum communication between a mining pool and client Miners may sometimes receive New Job messages from the pool while they are already working on a job. This happens for two reasons: either a new block has been mined or new transactions appeared for the current block. Therefore, the ratio between the number of jobs and the number of submitted solutions in a single NetFlow record pair is not one-to-one. This makes reconstructing Stratum semantics from NetFlow more difficult. By analyzing several mining traces using visualization tools, e.g., see Fig. 2, we discovered the following properties: The size of Stratum messages exchanged between a server and client remains nearly constant. New Job and Solution Submission are the largest messages sent by the pool and the miner, respectively. After Login, the miner generally sends 2 types of messages: Solution Submission and Keep-Alive. The number of TCP acknowledgments (ACK) sent by the miner, excluding keep-alives, is equal to the number of Submission Result and New Job messages sent by the pool. The number of ACKs sent by the pool is not equal to the number of Solution Submission messages. This happens because the pool often piggybacks the ACK flag for a Solution Submission in the Submission Result packet. These properties enable us to engineer Stratum-specific features for accurate detection of mining traffic. XMR-Ray detector This section presents the design and implementation of the cryptomining detector XMR-RAY, with a particular focus on design decisions and its novel features. Several characteristics of the current cryptomining ecosystem are crucial for the design of our detector. First, the Stratum protocol is the de facto standard. Furthermore, not only the protocol but also the few client implementations are largely shared among legitimate and malicious users. Finally, malicious mining operations are often carried out via legitimate public mining pools [20]; thus, the malicious actors are forced to use exactly the same client and protocol implementations as the said public pools. To take advantage of these findings, we made the following key design choices for XMR-RAY: It is based on network traffic inspection, not endpoint monitoring. It uses aggregate information, i.e., NetFlow, not DPI. It employs one-class, not binary, classification. Its features are mining-specific. Tradeoffs of these decisions are summarized in Table 2; empty cells denote cases with no discernible effects. Table 2 Summary of benefits and drawbacks of XMR-RAY's design Installing and managing endpoint monitoring tools with comprehensive coverage is challenging in dynamic environments. Network-based detection, in contrast, can be centralized. However, existing simple network-based approaches, e.g., IP/domain blacklists and IPS rules, cannot cope with encryption. Consequently, XMR-RAY inspects NetFlow instead of packets, achieving several advantages while potentially reducing accuracy due to the information loss. The choice of machine learning approach is based on the premise that pool mining traffic can be discriminated from all other network communications.
Therefore, we employ a one-class classifier (OCC) trained solely on mining traffic generated by generic legitimate mining pool clients. This is a clear competitive advantage over related work which employs binary classifiers, as they additionally require full network traffic for training. In essence, binary classifiers decide between two classes, while OCCs decide between the target class and everything else, i.e., the "universe". If the target class is well characterized by representative training data and characteristic features, and the universe is very diverse and constantly changing, an assumption which holds in our case, then OCC is clearly the superior choice. The benefits are very significant; the only drawback is the information loss, which may lead to lower accuracy. To alleviate information loss, the main innovation effort was invested in reconstructing as much information as possible from NetFlow records by designing Stratum-specific features. Section 6 demonstrates that this effort was very successful and that another crucial benefit was achieved this way: robustness against encryption, proxying, and tunneling. The cost is a minor increase in computation effort, already compensated for by using NetFlow and OCC. Clearly, the benefits are numerous and very significant, especially for the last 2 design choices, which also represent competitive advantages compared to related work. However, the main strength is also the main weakness: by being Stratum-specific, the approach is not applicable to completely novel protocols. This is discussed from the perspective of adversarial machine learning in Section 6.6 and as a limitation in Section 7. Deployment scenario A typical deployment scenario for our detector is shown in Fig. 3, where XMR-RAY receives metadata of traffic passing through the network gateway. A network flow is a communication session between two applications described by the tuple (As, ps, Ad, pd, P), where As and Ad are the source and destination IP addresses, ps and pd are the corresponding ports, and P is the IP protocol. Each direction in the communication is considered a separate, unidirectional flow. Typical deployment scenario for XMR-RAY NetFlow records are exported according to different timeout rules: an active timeout exports a record when the active timeout period expires, enabling periodic export of flow statistics in long network conversations; an inactive timeout occurs if no packet is observed in a flow within a specified time interval; a flag-based timeout is triggered when FIN or RST flags in TCP sessions indicate session termination. After NetFlow collection is enabled on a device, flow statistics are stored and updated in a cache. When a flow times out, its statistics are exported in the form of a NetFlow record. The starting point of our mining investigation is a representative corpus of legitimate mining traffic collected using a mining server optimized for CPU and GPU mining. Such traffic is largely machine-independent, and thus the machine learning models' results are not biased even though the data was generated and collected on a single machine. The server runs Ubuntu 18.04 and has an AMD RX 580 GPU and an AMD Ryzen 7 2700x CPU. The primary mining tools were XMRIG for CPU and CLAYMORE/XMR-STAK for GPU mining, as these are well maintained and widely used both legitimately and in cryptomining malware campaigns [20]. Traffic was collected by running TCPDUMP locally on the mining server and converted to NetFlow with both active and inactive timeouts set to 100 s.
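The export logic described above can be approximated by the following sketch (illustrative only, not the behavior of any particular exporter; the 100 s timeouts mirror our collection setup):

```python
from typing import Dict, List, Tuple

ACTIVE_TIMEOUT = 100.0    # seconds, mirroring our collection setup
INACTIVE_TIMEOUT = 100.0  # seconds

FlowKey = Tuple[str, int, str, int, int]   # (src, sport, dst, dport, proto)

flows: Dict[FlowKey, dict] = {}            # flow cache
exported: List[Tuple[FlowKey, dict]] = []  # exported "NetFlow records"

def observe_packet(key: FlowKey, size: int, ts: float, tcp_flags: int = 0) -> None:
    """Update the flow cache for one packet; export a record on timeout."""
    flow = flows.setdefault(key, {"packets": 0, "bytes": 0, "first": ts, "last": ts})
    if ts - flow["first"] >= ACTIVE_TIMEOUT:          # active timeout
        exported.append((key, dict(flow)))
        flow.update(packets=0, bytes=0, first=ts)
    flow["packets"] += 1
    flow["bytes"] += size
    flow["last"] = ts
    if tcp_flags & 0x05:                              # FIN (0x01) or RST (0x04)
        exported.append((key, flows.pop(key)))        # flag-based timeout

def sweep_inactive(now: float) -> None:
    """Export flows idle for longer than the inactive timeout."""
    for key in [k for k, f in flows.items() if now - f["last"] >= INACTIVE_TIMEOUT]:
        exported.append((key, flows.pop(key)))
```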
In our implementation, metadata is encoded using NetFlow v5, the most common and lightweight version, but alternative formats like IPFIX are also suitable. This procedure is used for training and evaluation in our experiments. To collect a comprehensive corpus of cleartext mining traffic, we mined in 25 well-known Monero public pools [97] for about 6 months, gathering around 4000 h of mining traffic. For the experimental evaluation, we also collected a corpus of NetFlow data (test dataset) from a large enterprise network (about 10k hosts). As for the mining traffic, export timeouts were set to 100 s. The test dataset was collected over 1 month and comprises around 500 million flows, including 16,000 TCP conversations (565,000 flows) longer than 30 min. This heterogeneous network environment is representative of large enterprises. We assume that the test data does not contain mining traffic; our manual investigation did not find any, even among our method's false positives. Finally, it is worth noting that all NetFlow records were collected with a sampling rate of 100%. This is standard practice among enterprises that deploy dedicated NetFlow exporters for security purposes. To enable the computation of performance metrics like true/false positive rate in our experiments, we inject subsets of mining conversations taken from our mining traffic collection into the test dataset. Specifically, we insert NetFlow records belonging to mining TCP conversations among the NetFlow records from the benign enterprise traffic. We carefully avoid using the injected mining traffic for training. The architecture of the XMR-RAY system is depicted in Fig. 4. It takes as input NetFlow records and operates in 2 modes: training and deployment. In training mode, NetFlow records from our database of collected mining traffic are used to train a machine learning model. In deployment mode, the trained model is used to classify NetFlow records resulting from TCP conversations of new, live network traffic as either mining or other traffic. In the following, we provide a detailed description of individual modules. TCP conversation reconstruction Stratum runs over TCP, and mining is carried out during long communication sessions which we denote as "conversations." NetFlow records are collected with a timeout that can be much shorter than the duration of a single conversation, e.g., 100 s. A single mining session therefore corresponds to many NetFlow records. The first step of our method is to reassemble unidirectional NetFlow records into bidirectional TCP conversations. All NetFlow records sharing the same TCP 4-tuple are grouped into a single TCP conversation. Traffic windowing by time In the second step, the previously reassembled TCP conversations are partitioned by time into overlapping successive windows, each window having the same time length of lW seconds. This is repeated every fW seconds. The overlap is due to the sliding-window nature of the splitting, with lW ≥ fW. Keeping lW constant ensures that time-dependent features have comparable values across all time windows. The sets of time-windowed TCP conversations produced in this step are the basic unit of processing for the machine learning model, i.e., the model's decisions are made based on NetFlow records collected within a time window of lW seconds from a single TCP conversation. In this step, the output of the previous module is prepared for processing by the machine learning algorithm.
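Before turning to feature extraction, the two preprocessing steps just described, conversation reconstruction and traffic windowing, can be sketched as follows (the record layout and helper names are illustrative, not the actual XMR-RAY implementation):

```python
from collections import defaultdict

def reconstruct_conversations(records):
    """Group unidirectional NetFlow records into bidirectional TCP conversations.

    Each record is a dict with src_addr/src_port/dst_addr/dst_port plus counters;
    both directions of a TCP session map to the same canonical 4-tuple key.
    """
    conversations = defaultdict(list)
    for r in records:
        a = (r["src_addr"], r["src_port"])
        b = (r["dst_addr"], r["dst_port"])
        key = tuple(sorted([a, b]))          # direction-independent key
        conversations[key].append(r)
    return conversations

def window_conversation(records, l_w=1800.0, f_w=900.0):
    """Split one conversation into overlapping windows of length l_w, every f_w seconds."""
    records = sorted(records, key=lambda r: r["first_seen"])
    start, end = records[0]["first_seen"], records[-1]["last_seen"]
    windows, t = [], start
    while t < end:
        windows.append([r for r in records
                        if r["first_seen"] < t + l_w and r["last_seen"] >= t])
        t += f_w
    return windows
```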
The feature extraction module computes 110 features capable of capturing the intrinsic network behavior of Stratum communications. The features are described in Section 5.5. The output is a set of real-valued feature vectors that describe the salient properties of individual traffic windows. Feature extraction is the last module in the preprocessing subsystem. The preprocessing is performed in exactly the same way for both training and deployment. In the training mode, incoming feature vectors are used to train a one-class classification model. In this mode, only mining traffic from the training dataset is used as input. The model is trained to identify whether a set of NetFlow records corresponding to a time window of lW seconds from a single TCP conversation contains mining traffic or not. The output of the training step is a one-class classification model. In prediction mode, the model predicts whether windows of live traffic represent mining or not. During deployment, decisions can also be made at the coarser granularity of TCP conversations by applying rules (e.g., majority voting) to the outcome of window-based prediction. In all but the real-world deployment experiment (see Section 6.8), we use window-based prediction to evaluate our method. This is intended to evaluate the worst-case performance in the absence of error correction mechanisms. A crucial contribution of this work is a novel set of features that describe characteristic traits of the Stratum protocol from the information aggregated in NetFlow records. To this end, we devised a novel feature extraction process dubbed "speculative reconstruction." The analysis of Stratum traffic that motivated our feature design is presented in Section 4. All features are computed for NetFlow records corresponding to individual traffic windows of duration lW seconds of TCP conversations. Group 1: Heuristics based on speculative reconstruction of Stratum The first group of features exploits the fact that Stratum messages have a very constrained format, which enables us to reconstruct characteristics of cleartext messages from aggregated values in NetFlow records. We dub this process "speculative" because the constraints we have identified do not always have a unique solution but nevertheless enable a good guess about the true sequence of Stratum messages. The process starts by analyzing each pair of corresponding unidirectional NetFlow records in a time-windowed TCP conversation, i.e., the NetFlow records from A to B and from B to A. Our goal is to estimate, for each pair of records, the following quintuple: the ACK packet size, the size of New Job messages, the size of Solution Submission messages, the number of New Job messages, and the number of Solution Submission messages. To this end, we apply 10 intrinsic constraints imposed by TCP and Stratum, for example: The ACK packet size in bytes is in {40, 44, …, 80}. The size of Solution Submission messages exceeds the size of ACK packets by at least 72 bytes (nonce + result). The size of New Job messages exceeds the size of ACK packets by at least 152 bytes (blob size). The number of the miner's packets is the sum of the numbers of ACK packets, Keep-Alive messages, and Solution Submission messages. The number of the miner's ACK packets is the sum of the numbers of New Job and Submission Result messages. After filtering out Keep-Alive messages, we guess the ACK packet size and estimate the remaining four quantities so as to match the numbers of packets and bytes observed in NetFlow records.
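For intuition, the enumeration step can be sketched as a small search over candidate message sizes for a single NetFlow record pair. The constants mirror the constraints listed above, but the simplified packet and byte balance equations are illustrative rather than the full set of 10 constraints:

```python
def candidate_quintuples(m_pkts, m_bytes, p_pkts, p_bytes):
    """Enumerate plausible (ack, job, sub, n_job, n_sub) solutions for one
    NetFlow record pair (miner->pool, pool->miner).

    Simplified balance, assuming Keep-Alive messages are already filtered out:
      miner packets = ACKs + Solution Submissions, with one ACK per New Job
      and per Submission Result, i.e. m_pkts = n_job + 2 * n_sub
      miner bytes   = (n_job + n_sub) * ack + n_sub * sub
    The real feature extractor applies 10 such constraints; this sketch keeps
    only enough of them to convey the idea.
    """
    solutions = []
    for ack in range(40, 81, 4):                       # ACK size in {40, 44, ..., 80}
        for sub in range(ack + 72, ack + 72 + 64, 4):  # Solution Submission >= ack + 72
            num = m_bytes - ack * m_pkts
            if num % (sub - ack):
                continue                               # non-integer n_sub -> discard
            n_sub = num // (sub - ack)
            n_job = m_pkts - 2 * n_sub
            if n_sub < 0 or n_job < 0:
                continue
            # Coarse guess of the New Job size from pool-side bytes, assuming
            # Submission Result packets are roughly ACK-sized.
            job = (p_bytes - n_sub * ack) // max(n_job, 1) if n_job else 0
            if n_job and job < ack + 152:              # New Job >= ack + 152
                continue
            solutions.append((ack, job, sub, n_job, n_sub))
    return solutions
```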
After discarding quintuples with non-integer values, we are usually left with several plausible quintuples, i.e., solutions, for each pair of NetFlow records. In the second step, we group all quintuples obtained for a time-windowed TCP conversation by the first 3 dimensions (i.e., ACK packet size, size of New Job messages, size of Solution Submission messages) and count their occurrences. These 3 dimensions are imposed by TCP and Stratum configuration parameters of the pool and miner, and they remain nearly constant for the duration of the session. The last 2 dimensions (i.e., number of New Job messages and number of Solution Submission messages) are variables reflecting the current workload of the miner. In the final step, we use the triplet counts from the second step to compute the following features for every window of lW seconds: the ratio of the largest triplet count to the total count of feasible triplets (F1.1); the difference between the largest and smallest triplet counts (F1.2); the standard deviation of triplet counts (F1.3); the ratio of the New Job packet size to the Solution Submission packet size for the most frequent triplet (F1.4); the standard deviation of New Job message counts across all NetFlow records for the most frequent triplet (F1.5); and the standard deviation of Solution Submission message counts across all NetFlow records for the most frequent triplet (F1.6). The feature sensitivity analysis of our classifier showed that these features play a major role in attaining high detection accuracy. In the following, we attempt to elucidate the speculative reconstruction on an example with an lW of 30 min. We use the same value for the experiments in Section 6. The essence of the speculative reconstruction process is solving a system of equations whose constraints are dictated primarily by Stratum semantics. As a result, in the case of a mining conversation, a single solution of the system should frequently repeat across the conversation's NetFlow record pairs. On the other hand, no solution should dominate others in non-mining conversations. Features F1.1–F1.3 explicitly aim at capturing this characteristic. A high value for these features means that only one triplet (ACK packet size, size of New Job messages, size of Solution Submission messages) solves the equation system for most NetFlow record pairs. This is a strong indicator of mining traffic. Figure 5 shows the statistical distribution of features F1.1–F1.3 for approximately 15,000 mining and normal conversations of length lW. Although features computed on mining conversations show some variability in their values, their medians are several times higher than those of normal conversations. This demonstrates the validity of speculative reconstruction and the importance of features F1.1–F1.3 as indicators of mining traffic. Statistical distribution of F1.1, F1.2, and F1.3 Features F1.4–F1.6 characterize the average difficulty of jobs requested by specific pools. For non-mining traffic, they are likely to appear random. In addition, F1.6 captures the effects of the VARDIFF algorithm [98] employed by pools to update the miners' job difficulty based on their average time to finish past jobs. While speculative reconstruction features carry a strong signal for differentiating between mining and other traffic, they alone cannot achieve a low false positive rate. We observed during feature development that the most similar traffic to Stratum was VoIP and protocols exhibiting polling or beaconing behaviors.
These protocols share commonalities with Stratum, such as a near-constant transmission rate, long duration, and repeating messages. To finally distinguish between mining and these similar protocols, we extended our feature set with the following. Group 2: Correlation between mean packet sizes of the miner and the pool The ratio of the byte count to the packet count of a NetFlow record gives its mean packet size. For the mean packet sizes of the 2 sequences of unidirectional NetFlow records in every time window, we compute different correlation metrics, e.g., the Pearson correlation coefficient. This captures the intuition that a New Job message from the pool is unlikely to be immediately followed by a solution from the miner. In NetFlow terms, the pool's record has a high mean packet size (since New Job is the biggest Stratum message), and the corresponding miner's record a low one, because it acknowledges the New Job but does not deliver the solution in the same record. In the other direction, when the miner sends a Solution Submission, it is unlikely that the pool has sent a New Job message in the previous seconds. Also, the pool needs to send a Submission Result and occasionally an ACK, which lowers the mean packet size since these packets are much smaller than New Job messages. Hence the two lists of mean packet sizes are anti-correlated for mining traffic and uncorrelated for non-mining traffic. Group 3: Correlation between packet count and mean packet size Features in this group are similar to group 2 but instead correlate the mean packet size and packet count of all NetFlow records with the same direction. We observe that a miner's Solution Submission triggers a response from the pool that the miner needs to ACK. Hence, a Solution Submission message boosts the number of packets sent by both sides, causing the mean packet size of the miner to rise (Solution Submission is the miner's biggest packet) and of the pool to drop (Submission Result is small). Hence, the miner's packet count and mean packet size are correlated; the opposite is true for the pool, and no correlation should be observed for most other traffic. Group 4: Ratio of transmitted bytes For each unidirectional NetFlow record (A to B), we find the corresponding answer (B to A) and compute the ratio of bytes sent and received. The features represent the distribution of such ratios: range, interquartile range, mean, and standard deviation. Group 5: Mean packet size These features represent the distribution of mean packet sizes for NetFlow records with the same direction, comprising standard deviation, range, and interquartile range. Group 6: Transmission rate These features represent the distribution of the transmission rate (bytes/s) for NetFlow records with the same direction, comprising standard deviation, interquartile range and minimum rate. Feature groups 4–6 capture the intuition that mining communication exchanges a finite set of possible packet types and maintains a relatively constant miner-pool communication pattern. Finally, the above semantically motivated features are complemented with useful general features derived from the number of sent bytes and the mean packet inter-arrival time. One-class classification In machine learning, one-class classification (OCC) attempts to identify objects of a specific class by learning from a training set containing only objects of that class.
The task in OCC is to define a classification boundary around the positive (target) class, such that it accepts as many target objects as possible, while minimizing the chance of accepting negative (outlier) objects. It is a one-vs-rest classification, where the rest are not observed during training. XMR-RAY uses the Isolation Forest OCC algorithm as implemented in SCIKIT-LEARN [99]. Its hyperparameters are tuned using grid search; the following values are chosen for all subsequent experiments: n_estimators=300, contamination=0.01, and max_samples is set to the number of training samples. At an early stage of the research, we evaluated other one-class classifiers, such as one-class SVM, but their results were inferior. Given the generally high accuracy of XMR-RAY, we did not study more advanced classifiers, e.g., deep learning. In this section, we test XMR-RAY in diverse environments, with the goal of providing a comprehensive and realistic evaluation. For detection in networks with thousands of hosts, it is crucial to maximize the detection rate (i.e., recall or true positive rate (TPR)) and minimize the false alarm rate (i.e., false positive rate (FPR)): $$\text{TPR} = \frac{TP}{P}, \qquad \text{FPR} = \frac{FP}{N}.$$ Above, TP represents the count of true alarms, P the total count of mining traffic windows, FP the count of false alarms, and N the total count of non-mining windows. A single pair of (TPR, FPR) values provides a very limited view of a detector's performance. Most detectors have variable detection thresholds (e.g., the percentage of trees in a random forest which vote for one of the classes) which, when adjusted, change their decisions and therefore (TPR, FPR) rates as well. The receiver operating characteristic (ROC) curve provides a comprehensive view of a detector's performance because it incorporates all (TPR, FPR) operating points of the detector for all values of its detection threshold (as a 2D curve). Throughout this section, we evaluate the detection performance using the area under ROC (AuROC), which provides a summary of the ROC curve. Its values lie in the range (0, 1); the higher, the better. Trade-off between latency and accuracy Our first experiment aims to find a good trade-off between detection latency and accuracy, as influenced by the window size lW, a parameter of our algorithm. Lowering its value reduces latency but also the amount of information available to make the decision. For each window size lW, we perform a variant of a 10-fold cross-validation. Our set of mining traffic comprises traffic collected from all 25 recorded mining pools (see Section 5.3). We divide the mining traffic into 10 different 80:20 splits. The 80% comprises the training set. The remaining 20% is mixed with our non-mining enterprise traffic collection (also described in Section 5.3) to serve as the test set. Figure 6 shows that the detection accuracy grows with lW, as expected. We choose lW = 1800 s (30 min), with TPR = (98.94 ± 0.34)% and FPR = (0.054 ± 0.027)%, and use this value in all subsequent experiments. The FPR is computed per time window (30 min) of a TCP stream, rather than per flow record. In our test environment with about 3500 TCP conversations per day that exceed 30 min, XMR-RAY produced about 3 false positives per day. Detection performance as a function of window size lW Detection of novel mining pools Another crucial property of cryptomining detectors is their capability to detect traffic from miners that communicate with previously unknown mining pools.
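Before turning to that experiment, a minimal sketch of the one-class training and scoring step with the hyperparameters quoted above is given below; the feature matrices are synthetic stand-ins for the 110-dimensional vectors of Section 5.5, so the resulting AuROC is only illustrative:

```python
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Synthetic stand-ins for the real feature vectors: "mining" windows cluster
# tightly, while "other" traffic is diffuse. Real features come from Section 5.5.
X_mining_train = rng.normal(loc=0.0, scale=0.3, size=(2000, 110))
X_mining_test = rng.normal(loc=0.0, scale=0.3, size=(500, 110))
X_other_test = rng.normal(loc=0.0, scale=2.0, size=(5000, 110))

# One-class training on mining traffic only, with the hyperparameters quoted above.
clf = IsolationForest(
    n_estimators=300,
    contamination=0.01,
    max_samples=X_mining_train.shape[0],
    random_state=0,
)
clf.fit(X_mining_train)

# Higher score_samples() means more similar to the (mining) training class.
X_test = np.vstack([X_mining_test, X_other_test])
y_test = np.r_[np.ones(len(X_mining_test)), np.zeros(len(X_other_test))]
print("AuROC:", roc_auc_score(y_test, clf.score_samples(X_test)))
```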
Detecting novel pools is a challenging task since mining traffic can be, and in practice already is, easily customized. Indeed, the majority of well-known Monero public pools use existing customizable tools like NODE-CRYPTONOTE-POOL [100] and NODEJS-POOL [101]. To evaluate XMR-RAY's ability to detect novel pools, we design a new experiment. Similarly to the previous case, we use 10-fold cross-validation, but this time, we make sure that none of the pools in the test set have appeared in the training set. This experiment simulates real-world deployment where completely new pools appear regularly. We evaluate several ratios of numbers of pools used for training and testing, ranging from 7:18 to 22:3. The experimental results depicted in Fig. 7 show a clear advantage in using more pools for training and suggest that model performance can improve over time as more mining traffic is collected for training. Using about 10 pools for training suffices to achieve a high TPR of (96.20 ± 3.04)% on the remaining 15 previously unseen pools with an FPR of only (0.058 ± 0.071)%. Detection performance on novel mining pools Detection of malware mining traffic To verify that XMR-RAY detects mining by malware, we set up a CUCKOO sandbox [102] and executed two samples belonging to the mining campaigns WEBCOBRA [103] and NRSMINER [104]. We collected around 30 h of traffic, thus generating an insignificant profit for the threat actors. We noticed that the WEBCOBRA miner connected directly to an IP address, while NRSMINER resolved its remote host via a DNS query. In both cases, the remote hosts may have been proxies or private pool servers, as they did not belong to any public Monero pools. Nevertheless, XMR-RAY correctly classified all 30-min (lW) time windows as mining. This experiment does not evaluate FPR by design, as the model is only applied to positive test samples. The result corroborates our hypothesis that there is no substantial difference between legitimate and illegitimate mining traffic. Detection in obfuscated network traffic Another experiment evaluates XMR-RAY's capability to correctly identify obfuscated mining traffic, i.e., using encryption or connecting through a tunnel, even when it was trained only on NetFlow records of cleartext mining traffic. Similarly to the previous experiment, in this section, we do not evaluate FPR by design, as the model is only applied to positive test samples. Encrypted mining Since the beginning of 2018, mining pools and tools increasingly provide the option to perform mining over an encrypted channel using SSL/TLS. In some cases, pools encourage the use of SSL by reducing fees for miners that do so [105]. In October 2018, XMRIG introduced support for SSL/TLS. To collect encrypted mining traffic, we mined for several hours with different mining tools in 11 out of 25 pools that supported SSL mining. All pools accepted one of the following ciphers proposed by the client: AES{128,256}-GCM-SHA{256,384} [106]. This choice is natural since AES-GCM has slowly replaced AES-CBC over the past years, becoming the most used encryption mechanism. Furthermore, TLS 1.3 no longer supports CBC-mode ciphers since AES-GCM combines stronger security, lower traffic overhead and higher performance on modern hardware. Counter mode of operation is designed to turn block ciphers into stream ciphers; hence, we expected AES-GCM (Galois/Counter Mode) to increase the packet size but keep the crucial properties and relations we observed in Stratum messages.
And indeed, the model detected 98.16% of the 1000 evaluated 30-min windows of SSL mining traffic. Although this finding is not surprising given our feature design, it is nevertheless a remarkable accomplishment for a machine learning algorithm trained without access to encrypted traffic. In the next step, we modified the mining client to propose other cipher suites in the TLS handshake. The only remaining ones the pools accepted were AES{128,256}-CBC-SHA{128,256}. The detection rate was 99.35% on 250 h of encrypted mining traffic; thus, the padding of the CBC mode of operation has a negligible influence on our features. From these results, we conclude that our model reliably detects encrypted mining traffic for two main reasons: its reliance on metadata instead of payload, and its custom feature design that reveals the intrinsic behavioral characteristics of the Stratum protocol. We are not aware of any use of SSL/TLS by cryptomining malware so far. However, encryption is becoming ever more widespread, and it is not difficult to imagine its use for this purpose in the near future. By proposing a model resistant to evasion using encryption, we hope to be one step ahead of malicious actors. SOCKS proxy This is an SSH tunnel which applications use to forward their traffic to a proxy server that relays it to the final destination. The clients must use an SSH agent. For this experiment, we collected several hours of mining traffic through a SOCKS 5 proxy using the ciphers CHACHA20-POLY1305, AES{128,192,256}-CTR, and AES{128,256}-GCM. To test a different cipher suite from the previous experiment, we forced the SSH session to use the AEAD cipher CHACHA20-POLY1305. On 250 h of mining over an SSH tunnel, the detection rate was 89.16%. Mining proxy In the past few years, several botnets started mining, and the mining pools started to ban them under the pressure of the mining community. A common approach for spotting botnets is to count the number of IP addresses that connect using the same wallet address (e.g., mineXMR.com [107]). To avoid a ban and mask their wallets, attackers started using mining proxies like XMRIG-PROXY [108]. Such tools are similar to standard proxies but communicate over Stratum and manage a large number of miners. They reduce the number of connections to the pool, e.g., up to 256 times for XMRIG-PROXY. XMR-RAY detected 90.08% of the 200 h of mining that we performed over XMRIG-PROXY. Detection of in-browser mining Previous longitudinal studies investigated the distribution of well-known cryptomining services in web pages [25, 27, 28]. Our collection included the top 3: COINHIVE, COINIMP [109], and CRYPTOLOOT [22]. These were found in around 75% of all websites with mining scripts within the Alexa Top 1 million list; COINHIVE alone accounted for 60%. We found the websites hosting these scripts via the search engine PublicWWW [110], connected to them to perform mining, and collected the traffic. The scripts throttled the CPU usage down to 30% to evade detection. Table 3 shows that XMR-RAY reliably detects COINHIVE and CRYPTOLOOT. Similarly to the previous two experiments, we do not evaluate its FPR by design, as the model is only applied to positive test samples. We further investigate the reasons for a somewhat lower detection rate for COINIMP. Table 3 Detection results for in-browser mining During in-browser mining, clients communicate with the mining pool using WebSockets. All our collected examples used WebSockets over TLS.
By collecting and decrypting a new session, we noticed COINIMP also used obfuscation. Looking at the distribution of packet sizes in the packet dump (before conversion to NetFlow), we noticed that COINIMP introduced variations to the Stratum workflow that reduced our detection rate. We thus conclude that the WebSocket proxy server alters the client-side protocol. On March 8, 2019, COINHIVE stopped its mining services. As a result, other mining services took advantage of COINHIVE's absence, and cryptojacking attacks remain widespread [111–113]. Robustness against adversarial evasion Machine learning algorithms are vulnerable to adversarial examples, i.e., data which is manipulated in order to evade a specific detector. We implement several software modifications for the XMRIG mining client as a preliminary evaluation of XMR-RAY's robustness against adversarial evasion. Our modifications are intended to trick our model into misclassifying mining TCP conversations as non-mining. Similarly to previous experiments, we do not evaluate FPR by design, as the model is only applied to positive test samples. Results of our 3 evasion attacks are shown in Table 4. Table 4 Detection results for adversarial evasion Attack 1: Data injection Analyzing the open-source code of mining pools, we noticed that the JSON parser function extracts only the key-value pairs it is interested in and discards the others. We then add to each Stratum Solution Submission message a new key-value pair of random length between 32 and 512 bytes. This modification had little influence on the detection rate. Attack 2: Message injection If the JSON key method is missing from a client message, the pool does not send a Stratum reply but simply a TCP ACK. We make XMRIG send random-length messages at random intervals of 2–10 s. This attack considerably reduces the detection rate. Modifying the number of packets and randomizing packet sizes disrupts our client-side features. Attack 3: Error triggering The last example is the one that, among the three, alters the mining traffic shape and behavior the most. We modified the mining client to send messages with a wrong value for the method JSON key. We noticed that certain public pools answer with an error message that contains the same ID as the received faulty message. Therefore, by sending messages with a wrong method and variable-length ID at random intervals, we can trigger variable-length error messages in response from the pool. Doing so, we manage to evade detection around 50% of the time. This does not come as a surprise since a modification like this heavily affects the standard Stratum workflow and thus both our client- and pool-side features. This technique works only with pools that deploy the standard NODE-CRYPTONOTE-POOL. From our experiments, we conclude that XMR-RAY can indeed be vulnerable to evasion attacks. However, for the attacks to work, the adversary must deviate from the standard Stratum protocol. Nevertheless, such deviations from the protocol can be both detected and sanctioned by security updates to pool software. Comparison to prior work To position our system among existing detectors, we compare it to its most closely related prior work, Muñoz et al. [94]. This work also follows a network-based approach using NetFlow but, in contrast to XMR-RAY, employs generic features and binary classification. We reimplemented it to the best of our ability and describe the technical details of our reimplementation in Appendix A.
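For reference, a minimal sketch of the generic per-window features used in our reading of this baseline is shown below; the record layout matches the earlier sketches, and this is an illustration of the feature list given in Appendix A rather than the original authors' code:

```python
def baseline_features(inbound, outbound, duration):
    """Generic per-window features in the spirit of the reimplemented baseline:
    per-direction rates and size ratios computed from NetFlow counters.

    `inbound` and `outbound` are lists of NetFlow-style records (dicts with
    'packets' and 'bytes'); `duration` is the window length in seconds.
    """
    pkts_in = sum(r["packets"] for r in inbound)
    pkts_out = sum(r["packets"] for r in outbound)
    bytes_in = sum(r["bytes"] for r in inbound)
    bytes_out = sum(r["bytes"] for r in outbound)
    return {
        "pkts_in_per_s": pkts_in / duration,
        "pkts_out_per_s": pkts_out / duration,
        "bytes_in_per_s": bytes_in / duration,
        "bytes_out_per_s": bytes_out / duration,
        "bytes_per_pkt_in": bytes_in / max(pkts_in, 1),
        "bytes_per_pkt_out": bytes_out / max(pkts_out, 1),
        "bytes_ratio": bytes_in / max(bytes_out, 1),
        "pkts_ratio": pkts_in / max(pkts_out, 1),
    }
```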
Figure 8 compares the detection performance of both approaches on detecting novel mining pools, as in Section 6.2. The results for XMR-RAY, repeated from Fig. 7, are clearly superior. In particular, using 10 training pools, it achieved TPR = (96.20 ± 3.04)% and FPR = (0.058 ± 0.071)%, while the competitor had an about 28× higher FPR of (1.65 ± 1.07)% at a lower detection rate of (94.96 ± 2.60)%. Comparison on detecting novel mining pools The detection rate of the approaches was compared across all experiments described in Sections 6.3–6.6, as summarized in Table 5. Malware traffic is fully detected by both. Muñoz et al. is more robust against stream ciphers but considerably more vulnerable to CBC-mode encryption. Other experiments, most notably adversarial evasion, provide rich empirical evidence for the superiority of XMR-RAY. Given the increase in TPR and the reduction in FPR, we conclude that XMR-RAY's specialized features and OCC approach provide a substantial improvement over the state of the art. Table 5 Competitive comparison Results from real-world deployment In the final stage of our research, we deployed XMR-RAY with the trained model described in Section 6.1 in the network of a large university with around 100,000 endpoints. As is common in practical deployments, we used an error-correction mechanism for the model's predictions, which we refer to as majority voting. Our model was run every 3 hours, loading the past 3 hours of traffic, and labeling a TCP conversation as mining only if the majority of its traffic windows had been labeled as such. We configured the traffic windowing module to use an overlap of fW = lW/2 = 900 s, resulting in at most 11 traffic windows within 3 h. To assist security analysts in incident prioritization, we categorized each TCP conversation (not limited to 3 h) into low or high risk, depending on the count and ratio of its windows classified as mining. In 2 weeks of operation in the university network, XMR-RAY analyzed 2,219,410 TCP conversations longer than lW = 1800 s and labeled 52 as mining: 5 with high and 47 with low risk. We manually investigated the positives to assess the detection performance. The 5 high-risk conversations were confirmed as Monero mining. Interestingly, only one host was mining by connecting directly to a well-known public pool (pool.supportxmr.com:7777), while the others were mining through proxies. Two of these proxies' IP addresses were associated with malware activity according to AbuseIPDB [114]. In addition, one was listed as an IoC for a Chinese cryptomining hacking campaign targeting macOS users [115]. We were not able to confirm any of the 47 low-risk positives; thus, the system achieved an FPR of 0.0021%, i.e., about 3.4 false positives per day. The risk categorization has helped the analysts focus their attention on genuine threats. The experimental evaluation presented above demonstrates that the proposed method has clearly met the objectives of its design. In this section, we elucidate design limitations and discuss possible solutions for overcoming them, as well as further applications of our design ideas for NetFlow traffic analysis. One limitation of our design is its lack of support for UDP. No existing cryptomining implementations known to us use UDP, but an attacker could, in principle, tunnel Stratum over UDP. To close this evasion opportunity, unidirectional NetFlow features could be investigated that do not rely on any TCP communication patterns. Another limitation of our design is its specialization to Stratum.
Discussion

The experimental evaluation presented above demonstrates that the proposed method has clearly met the objectives of its design. In this section, we elucidate design limitations and discuss possible solutions for overcoming them, as well as further applications of our design ideas for NetFlow traffic analysis.

One limitation of our design is its lack of support for UDP. No existing cryptomining implementations known to us use UDP, but an attacker could, in principle, tunnel Stratum over UDP. To close this evasion opportunity, unidirectional NetFlow features that do not rely on any TCP communication patterns could be investigated. Another limitation of our design is its restriction to Stratum. If pools migrate to novel protocols, our method will become ineffective. However, we consider this scenario unlikely in the near future. Stratum is widely deployed, proven, and reliable for its users, and we are not aware of plans to replace it. For cryptojackers to avoid using Stratum, they would have to develop both a custom mining client and a custom pool, which, besides being costly, represents a single point of failure and brings maintenance overhead due to Monero hard forks. Stratum will likely be replaced at some point in the future, but the same is true for all protocols, including HTTP and TLS, which went through major changes recently. Just as new detectors were necessary for HTTP/2 and TLS 1.3, researchers will need to build new detectors for future pool mining protocols, an evolutionary approach typical of security.

Adversarial examples are a further threat to our technique, as they are to all other machine learning approaches. As shown in Table 4, some hand-crafted obfuscations indeed reduce the detection accuracy. However, for the protocol deviations to succeed, they must be accepted by the pool. Legitimate public pools are incentivized to block deviating miners, and do so; we therefore expect that our proposed attacks will be rendered ineffective soon after they are deployed. Illegitimate pools are free to change the protocol, in which case the evasion attack reduces to a protocol modification attack, which we addressed in the previous paragraph.

An exciting direction for future work is a deeper investigation of the speculative reconstruction technique. In the feature sensitivity analysis for our classifier (omitted due to lack of space), we observed that the features obtained by this technique (group 1) play the major role in attaining high detection accuracy. It is therefore interesting to understand which other applications of network metadata analysis can benefit from this design technique, but also whether it may negatively affect user privacy. The interconnection of various design decisions enabled by this technique requires a better understanding of its mathematical properties, especially the existence and uniqueness of solutions for different protocols.

Conclusion

In this paper, we addressed the problem of detecting illegitimate cryptocurrency mining in network traffic. We demonstrated that the de facto standard protocol for pooled mining, Stratum, exhibits salient patterns which enable its reliable detection using NetFlow. The presented system XMR-RAY outperforms the closest competitors by combining one-class classification (OCC) with novel Stratum-specific features. OCC is trained solely on mining traffic generated by legitimate mining clients, while the binary classifiers employed by the state of the art also require full non-mining network traffic. When the target class is well characterized by representative training data and characteristic features, while the other classes are very diverse and constantly changing, an assumption which holds true for cryptocurrency mining, OCC is a superior choice and provides major benefits in efficiency, privacy, cost, deployment, and robustness.
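As an illustration of this training setup, the following minimal sketch fits a one-class model on mining-only feature vectors. The use of scikit-learn's OneClassSVM, the feature dimensionality, and the nu parameter are assumptions for the example, not the exact configuration of XMR-RAY.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import OneClassSVM

# Placeholder feature matrix: one row per traffic window of *mining* traffic.
# In XMR-RAY these rows would hold the Stratum-specific features.
rng = np.random.default_rng(0)
X_mining = rng.normal(size=(500, 12))

# The one-class model is fitted on the positive class only;
# no benign (non-mining) traffic is required for training.
occ = make_pipeline(StandardScaler(), OneClassSVM(nu=0.05, kernel="rbf", gamma="scale"))
occ.fit(X_mining)

# At inference time, +1 means "consistent with mining", -1 means "not mining".
X_new = rng.normal(size=(5, 12))
print(occ.predict(X_new))
```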
Our principal innovation is a set of features using constraint-solving to assess whether NetFlow records corresponding to a TCP stream are likely to originate from Stratum traffic. These features also provide another crucial benefit: robustness against encryption, proxying, and tunneling. A comprehensive experimental evaluation in a large corporate environment demonstrated high effectiveness. With a detection latency of 30 min, XMR-RAY reached a detection accuracy of 98.94% at a false alarm rate of 0.05%, about 28 times lower than that of the closest competitor. We have also demonstrated its ability to detect traffic towards novel mining pools, as well as traffic towards the most prominent cryptojacking services. Deployed in a large university network, it successfully detected real-world mining attacks. Like other machine learning systems, our technique is not immune to adversarial examples. We discuss this and other limitations of our approach, and outline the prospects for using our novel feature extraction technique in other applications of network metadata analysis.

Appendix A

To compare our detector to the state of the art, we reimplemented the detector described in [94]. The features are as follows:
- Inbound and outbound packets/second
- Inbound and outbound bytes/second
- Inbound and outbound bytes/packet
- Bytes_inbound/bytes_outbound ratio
- Packets_inbound/packets_outbound ratio

We are confident about the quality of our reimplementation since, even though the baseline features are not fully described, their names are self-explanatory and leave little room for interpretation. For a fair and accurate comparison, the goal was to evaluate both approaches following the same procedure. However, that is generally not possible because they are trained on different data: OCC is trained only on mining traffic, while binary classifiers also require non-mining traffic. Therefore, to train the binary classifiers, we resorted to another corpus of NetFlow data from the same enterprise environment where the evaluation was performed. To simulate a realistic deployment, the non-mining part of the binary classifiers' training traffic was collected a month before the test dataset described in Section 5.3. Crucially, the entire test dataset and the mining portion of the training dataset remain identical. We repeated the experiments of Sections 6.1–6.6 for the two best models from [94]: C4.5 and CART. We used WEKA, specifically the PYTHON-WEKA-WRAPPER library [116], to implement the C4.5 algorithm, and SCIKIT-LEARN for CART. In our environment, CART performed significantly better, and therefore, we compare only its results against ours.

Due to the commercially and privacy-sensitive nature of the research, no data subjects consented to their data being retained or shared.

Abbreviations

AES: Advanced encryption standard
ASCII: American standard code for information interchange
ASIC: Application-specific integrated circuit
AuROC: Area under receiver operating characteristic
CART: Classification and regression trees
CBC: Cipher block chaining
CPU: Central processing unit
CTR: Counter mode
FPR: False-positive rate
GCM: Galois/counter mode
HTTPS: Hypertext transfer protocol secure
IoC: Indicator of compromise
IPFIX: Internet protocol flow information export
IPS: Intrusion prevention system
JSON: JavaScript object notation
OCC: One-class classification
PoW: Proof of work
ROC: Receiver operating characteristic
SHA: Secure hash algorithm
SSL: Secure sockets layer
TCP: Transport control protocol
TPR: True-positive rate
UDP: User datagram protocol
XMR: Monero cryptocurrency

References

S. Higgins, $600 billion: cryptocurrency market cap sets new record (2017). https://www.coindesk.com/600-billion-cryptocurrency-market-cap-sets-new-record/ Accessed 19 Apr 2021. CoinMarketCap, Global cryptocurrency charts - total market capitalization (2021). https://coinmarketcap.com/charts/ Accessed 17 May 2021. M. Yamazaki, Tokyo-based cryptocurrency exchange hacked, losing $530 million: NHK (2018).
https://www.reuters.com/article/us-japan-cryptocurrency/tokyo-based-cryptocurrency-exchange-hacked-losing-530-million-nhk-idUSKBN1FF29C Accessed 21 Apr 2021. J. Russell, Korean crypto exchange Bithumb says it lost over $30M following a hack (2018). https://techcrunch.com/2018/06/19/korean-crypto-exchange-bithumb-says-it-lost-over-30m-following-a-hack/ Accessed 21 Apr 2021. M. Yuval, CoinDash TGE Hack findings report 15.11.17 (2017). https://blog.coindash.io/coi---tge-hack-findings-report-15-11-17-9657465192e1 Accessed 21 Apr 2021. E. Ananin, A. Semenchenko, Copy-pasting thief from a copy-pasted code (2018). https://www.fortinet.com/blog/threat-research/copy-pasting-thief-from-a-copy-pasted-code.html Accessed 21 Apr 2021. J. Wilmoth, Hackers seize $32 million in Ethereum in parity wallet breach (2017). https://www.ccn.com/hackers-seize-32-million-in-parity-wallet-breach/ Accessed 21 Apr 2021. S. Higgins, Tether claims $30 million in US dollar token stolen (2017). https://www.coindesk.com/tether-claims-30-million-stable-token-stolen-attacker Accessed 21 Apr 2021. R. Iyengar, More than $70 million stolen in bitcoin hack (2017). https://money.cnn.com/2017/12/07/technology/nicehash-bitcoin-theft-hacking/index.html Accessed 21 Apr 2021. C. Budd, Threat brief: malware authors mine Monero across the globe in a big way (2108). https://unit42.paloaltonetworks.com/threat-brief-malware-authors-mine-monero-across-globe-big-way/ Accessed 21 Apr 2021. E. Lopatin, Kaspersky Security Bulletin 2018 story of the year: miners (2018). https://securelist.com/kaspersky-security-bulletin-2018-story-of-the-year-miners/89096/Accessed 21 Apr 2021. European Union Agency For Network and Information Security, Enisa threat landscape report 2018. Technical report, ENISA (2019). https://www.enisa.europa.eu/publications/enisa-threat-landscape-report-2018. McAfee Labs, McAfee Labs Threats Report (2018). https://www.mcafee.com/enterprise/en-us/assets/reports/rp-quarterly-threats-dec-2018.pdf Accessed 21 Apr 2021. McAfee Labs, McAfee Labs Threats Report (2019). https://www.mcafee.com/enterprise/en-us/assets/reports/rp-quarterly-threats-aug-2019.pdf Accessed 21 Apr 2021. C. Cimpanu, Cryptomining malware saw new life over the summer as Monero value tripled (2019). https://www.zdnet.com/article/crypto-mining-malware-saw-new-life-over-the-summer-as-monero-value-tripled/ Accessed 21 Apr 2021. AMR, Kaspersky Security Bulletin 2019 (2019). https://securelist.com/kaspersky-security-bulletin-2019-statistics/95475 Accessed 21 Apr 2021. C. Cimpanu, Thousands of enterprise systems infected by new Blue Mockingbird malware gang (2020). https://www.zdnet.com/article/thousands-of-enterprise-systems-infected-by-new-blue-mockingbird-malware-gang/ Accessed 21 Apr 2021. J. Karasek, A. Remillano, Outlaw updates kit to kill older miner versions, targets more systems (2020). https://blog.trendmicro.com/trendlabs-security-intelligence/outlaw-updates-kit-to-kill-older-miner-versions-targets-more-systems/ Accessed 21 Apr 2021. A. Windsor, Breaking down a two-year run of Vivin's cryptominers (2020). https://blog.talosintelligence.com/2020/01/vivin-cryptomining-campaigns.html Accessed 21 Apr 2021. S. Pastrana, G. Suarez-Tangil, in Proceedings of the Internet Measurement Conference, IMC. A first look at the crypto-mining malware ecosystem: a decade of unrestricted wealth (ACMAmsterdam, 2019), pp. 73–86. https://doi.org/10.1145/3355369.3355576. BitcoinWiki, Stratum mining protocol (2020). 
https://en.bitcoinwiki.org/wiki/Stratum_mining_protocol Accessed 21 Apr 2021. Crypto-Loot, CryptoLoot earn more from your visitors (2020). https://crypto-loot.org/ Accessed 21 Apr 2021. N. Buchka, A. Kivva, D. Galov, Jack of all trades (2017). https://securelist.com/jack-of-all-trades/83470/ Accessed 21 Apr 2021. S. Eskandari, A. Leoutsarakos, T. Mursch, J. Clark, in IEEE European Symposium on Security and Privacy Workshops, EuroS&P Workshops. A first look at browser-based cryptojacking (IEEELondon, 2018), pp. 58–66. https://doi.org/10.1109/EuroSPW.2018.00014. J. Rüth, T. Zimmermann, K. Wolsing, O. Hohlfeld, in Proceedings of the Internet Measurement Conference IMC. Digging into browser-based crypto mining (ACMBoston, 2018), pp. 70–76. https://dl.acm.org/citation.cfm?id=3278539. G. Hong, Z. Yang, S. Yang, L. Zhang, Y. Nan, Z. Zhang, M. Yang, Y. Zhang, Z. Qian, H. Duan, in ACM SIGSAC Conference on Computer and Communications Security CCS, ed. by D. Lie, M. Mannan, M. Backes, and X. Wang. How you get shot in the back: a systematical study about cryptojacking in the real world (ACMToronto, 2018), pp. 1701–1713. https://doi.org/10.1145/3243734.3243840. H. L. J. Bijmans, T. M. Booij, C. Doerr, in 28th USENIX Security Symposium, USENIX Security, ed. by N. Heninger, P. Traynor. Inadvertently making cyber criminals rich: a comprehensive study of cryptojacking campaigns at internet scale (USENIX AssociationSanta Clara, 2019), pp. 1627–1644. https://www.usenix.org/conference/usenixsecurity19/presentation/bijmans. R. K. Konoth, E. Vineti, V. Moonsamy, M. Lindorfer, C. Kruegel, H. Bos, G. Vigna, in Proceedings of the 2018 ACM SIGSAC Conference on Computer and Communications Security, CCS, ed. by D. Lie, M. Mannan, M. Backes, and X. Wang. MineSweeper: an in-depth look into drive-by cryptocurrency mining and its defense (ACMToronto, 2018), pp. 1714–1730. https://doi.org/10.1145/3243734.3243858. J. D. P. Rodriguez, J. Posegga, in Proceedings of the 34th Annual Computer Security Applications Conference ACSAC. RAPID: resource and API-based detection against in-browser miners (ACMSan Juan, 2018), pp. 313–326. https://doi.org/10.1145/3274694.3274735. W. Wang, B. Ferrell, X. Xu, K. W. Hamlen, S. Hao, in 23rd European Symposium on Research in Computer Security, 11099, ed. by J. López, J. Zhou, and M. Soriano. SEISMIC: SEcure In-lined Script Monitors for Interrupting Cryptojacks (SpringerBarcelona, 2018), pp. 122–142. https://doi.org/10.1007/978-3-319-98989-1_7. D. Carlin, P. O'Kane, S. Sezer, J. Burgess, in 16th Annual Conference on Privacy, Security and Trust, ed. by K. McLaughlin, A. A. Ghorbani, S. Sezer, R. Lu, L. Chen, R. H. Deng, P. Miller, S. Marsh, and J. R. C. Nurse. Detecting cryptomining using dynamic analysis (IEEE Computer SocietyBelfast, 2018), pp. 1–6. https://doi.org/10.1109/PST.2018.8514167. D. Y. Huang, H. Dharmdasani, S. Meiklejohn, V. Dave, C. Grier, D. McCoy, S. Savage, N. Weaver, A. C. Snoeren, K. Levchenko, in 21st Annual Network and Distributed System Security Symposium. Botcoin: monetizing stolen cycles (The Internet SocietySan Diego, 2014). https://www.ndss-symposium.org/ndss2014/botcoin-monetizing-stolen-cycles. xmrig, XMRig (2021). https://xmrig.com/ Accessed 21 Apr 2021. J. Grunzweig, The rise of the cryptocurrency miners (2018). https://unit42.paloaltonetworks.com/unit42-rise-cryptocurrency-miners/ Accessed 21 Apr 2021. V. Bulavas, A. Kazantsev, A mining multitool (2018). https://securelist.com/a-mining-multitool/86950/ Accessed 21 Apr 2021. 
SecurityFocus, Microsoft Windows SMB Server CVE-2017-0144 Remote Code Execution Vulnerability (2017). https://www.securityfocus.com/bid/96704 Accessed 21 Apr 2021. Dr. WEB Anti-virus, Linux.BtcMine.174 (2018). https://vms.drweb.com/virus/?i=17645163 Accessed 21 Apr 2021. E. Vasilenko, O. Mamedov, To crypt, or to mine - that is the question (2018). https://securelist.com/to-crypt-or-to-mine-that-is-the-question/86307/ Accessed 21 Apr 2021. P. Papadopoulos, P. Ilia, E. P. Markatos, Truth in web mining: Measuring the profitability and cost of cryptominers as a web monetization model. CoRR (2018). http://arxiv.org/abs/1806.01994. Coinhive, Coinhive Monero JavaScript Mining (2019). https://web.archive.org/web/20190109010215/https://coinhive.com/ Accessed 21 Apr 2021. C. Leonard, CoinHive cryptocurrency mining script injected into 1000s of government websites via BrowseAloud plugin (2018). https://www.forcepoint.com/blog/x-labs/coinhive-cryptocurrency-mining-script-injected-1000s-government-websites Accessed 21 Apr 2021. C. Liu, J. Chen, Google's DoubleClick abused to deliver miners (2018). https://blog.trendmicro.com/trendlabs-security-intelligence/malvertising-campaign-abuses-googles-doubleclick-to-deliver-cryptocurrency-miners/ Accessed 21 Apr 2021. M. Hron, D. Jursa, MikroTik mayhem: Cryptomining campaign abusing routers (2018). https://blog.avast.com/mikrotik-routers-targeted-by-cryptomining-campaign-avast Accessed 21 Apr 2021. E. Tekiner, A. Acar, A. S. Uluagac, E. Kirda, A. A. Selcuk, SoK: cryptojacking malware (2021). https://doi.org/2103.03851. R. Sekar, A. Gupta, J. Frullo, T. Shanbhag, A. Tiwari, H. Yang, S. Zhou, in Proceedings of the 9th ACM Conference on Computer and Communications Security, ed. by V. Atluri. Specification-based anomaly detection: a new approach for detecting network intrusions (ACMWashington, 2002), pp. 265–274. https://doi.org/10.1145/586110.586146. Y. Huang, W. Lee, in 7th International Workshop on Recent Advances in Intrusion Detection, 3224, ed. by E. Jonsson, A. Valdes, and M. Almgren. Attack analysis and detection for ad hoc routing protocols (SpringerSophia Antipolis, 2004), pp. 125–145. https://doi.org/10.1007/978-3-540-30143-1_7. F. Erlacher, F. Dressler, On high-speed flow-based intrusion detection using snort-compatible signatures. IEEE Trans Dependable Secure Comput., 1–1 (2020). B. Anderson, D. A. McGrew, in Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. Machine learning for encrypted malware traffic classification: accounting for noisy labels and non-stationarity (ACMHalifax, 2017), pp. 1723–1732. https://doi.org/10.1145/3097983.3098163. X. Deng, J. Mirkovic, in 11th USENIX Workshop on Cyber Security Experimentation and Test, ed. by C. S. Collberg, P. A. H. Peterson. Malware analysis through high-level behavior (USENIX AssociationBaltimore, 2018). https://www.usenix.org/conference/cset18/presentation/deng. K. Bartos, M. Sofka, V. Franc, in 25th USENIX Security Symposium, USENIX Security, ed. by T. Holz, S. Savage. Optimized invariant representation of network traffic for detecting unseen malware variants (USENIX AssociationAustin, 2016), pp. 807–822. https://www.usenix.org/conference/usenixsecurity16/technical-sessions/presentation/bartos. B. A. Alahmadi, I. Martinovic, in 2018 APWG Symposium on Electronic Crime Research, eCrime. MalClassifier: malware family classification using network flow sequence behaviour (IEEESan Diego, 2018), pp. 1–13. https://doi.org/10.1109/ECRIME.2018.8376209. M. 
Piskozub, R. Spolaor, I. Martinovic, Malalert: detecting malware in large-scale network traffic using statistical features. SIGMETRICS Perform. Evaluation Rev.46(3), 151–154 (2018). https://doi.org/10.1145/3308897.3308961. G. Gu, R. Perdisci, J. Zhang, W. Lee, in Proceedings of the 17th USENIX Security Symposium, ed. by P. C. van Oorschot. BotMiner: clustering analysis of network traffic for protocol- and structure-independent botnet detection (USENIX AssociationSan Jose, 2008), pp. 139–154. http://www.usenix.org/events/sec08/tech/full_papers/gu/gu.pdf. L. Bilge, D. Balzarotti, W. K. Robertson, E. Kirda, C. Kruegel, ed. by R. H. Zakon. 28th Annual Computer Security Applications Conference (ACMOrlando, 2012), pp. 129–138. https://doi.org/10.1145/2420950.2420969. D. Zhao, I. Traoré, B. Sayed, W. Lu, S. Saad, A. A. Ghorbani, D. Garant, Botnet detection based on traffic behavior analysis and flow intervals. Comput. Secur.39:, 2–16 (2013). https://doi.org/10.1016/j.cose.2013.04.007. Y. Gao, Z. Li, Y. Chen, in 26th IEEE International Conference on Distributed Computing Systems (ICDCS). A DoS resilient flow-level intrusion detection approach for high-speed networks (IEEE Computer SocietyLisboa, 2006), p. 39. https://doi.org/10.1109/ICDCS.2006.6. A. Lakhina, M. Crovella, C. Diot, in Proceedings of the ACM SIGCOMM Conference on Applications, Technologies, Architectures, and Protocols for Computer Communications, ed. by R. Guérin, R. Govindan, and G. Minshall. Mining anomalies using traffic feature distributions (ACMPhiladelphia, 2005), pp. 217–228. https://doi.org/10.1145/1080091.1080118. G. Münz, G. Carle, in 10th IFIP/IEEE International Symposium on Integrated Network Management. Real-time analysis of flow data for network attack detection (IEEEMunich, 2007), pp. 100–108. https://doi.org/10.1109/INM.2007.374774. A. Sperotto, R. Sadre, A. Pras, in IP Operations and Management, 8th IEEE International Workshop, 5275, ed. by N. Akar, M. Pióro, and C. Skianis. Anomaly characterization in flow-based traffic time series (SpringerSamos Island, 2008), pp. 15–27. https://doi.org/10.1007/978-3-540-87357-0_2. Q. Zhao, J. J. Xu, A. Kumar, Detection of super sources and destinations in high-speed networks: algorithms, analysis and evaluation. IEEE J. Sel. Areas Commun.24(10), 1840–1852 (2006). https://doi.org/10.1109/JSAC.2006.877139. I. Paredes-Oliva, P. Barlet-Ros, J. Solé-Pareta, in International Workshop on Traffic Monitoring and Analysis, 5537, ed. by M. Papadopouli, P. Owezarski, and A. Pras. Portscan detection with sampled NetFlow (SpringerAachen, 2009), pp. 26–33. https://doi.org/10.1007/978-3-642-01645-5_4. T. Dübendorfer, B. Plattner, in 14th IEEE International Workshops on Enabling Technologies (WETICE). Host behaviour based early detection of worm outbreaks in internet backbones (IEEE Computer SocietyLinköping, 2005), pp. 166–171. https://doi.org/10.1109/WETICE.2005.40. T. Dübendorfer, A. Wagner, B. Plattner, in Proceedings of the First IEEE International Workshop on Critical Infrastructure Protection IWCIP '05. A framework for real-time worm attack detection and backbone monitoring (IEEE Computer SocietyUSA, 2005), pp. 3–12. https://doi.org/10.1109/IWCIP.2005.2. M. Conti, L. V. Mancini, R. Spolaor, N. V. Verde, Analyzing android encrypted network traffic to identify user actions. IEEE Trans. Inf. Forensics Secur.11(1), 114–125 (2016). https://doi.org/10.1109/TIFS.2015.2478741. M. Conti, L. V. Mancini, R. Spolaor, N. V. 
Verde, in Proceedings of the 5th ACM Conference on Data and Application Security and Privacy, CODASPY, ed. by J. Park, A. C. Squicciarini. Can't you hear me knocking: identification of user actions on Android apps via traffic analysis (ACMSan Antonio, 2015), pp. 297–304. https://doi.org/10.1145/2699026.2699119. E. Papadogiannaki, C. Halevidis, P. Akritidis, L. Koromilas, 11050, ed. by M. Bailey, T. Holz, M. Stamatogiannakis, and S. Ioannidis. 21st International Symposium on Research in Attacks, Intrusions, and Defenses (SpringerHeraklion, 2018), pp. 315–334. https://doi.org/10.1007/978-3-030-00470-5_15. L. Bernaille, R. Teixeira, I. Akodkenou, A. Soule, K. Salamatian, Traffic classification on the fly. Comput. Commun. Rev.36(2), 23–26 (2006). https://doi.org/10.1145/1129582.1129589. D. Rossi, S. Valenti, in Proceedings of the 6th International Wireless Communications and Mobile Computing Conference, ed. by A. Helmy, P. Mueller, and Y. Zhang. Fine-grained traffic classification with NetFlow data (ACMCaen, 2010), pp. 479–483. https://doi.org/10.1145/1815396.1815507. V. Carela-Español, P. Barlet-Ros, A. Cabellos-Aparicio, J. Solé-Pareta, Analysis of the impact of sampling on netflow traffic classification. Comput. Netw.55(5), 1083–1099 (2011). https://doi.org/10.1016/j.comnet.2010.11.002. T. T. T. Nguyen, G. J. Armitage, A survey of techniques for internet traffic classification using machine learning. IEEE Commun. Surv. Tutor.10(1-4), 56–76 (2008). https://doi.org/10.1109/SURV.2008.080406. M. Conti, Q. Li, A. Maragno, R. Spolaor, The dark side(-channel) of mobile devices: a survey on network traffic analysis. IEEE Commun. Surv. Tutor.20(4), 2658–2713 (2018). https://doi.org/10.1109/COMST.2018.2843533. J. Rauchberger, S. Schrittwieser, T. Dam, R. Luh, D. Buhov, G. Pötzelsberger, H. Kim, in Proceedings of the 13th International Conference on Availability, Reliability and Security, ed. by S. Doerr, M. Fischer, S. Schrittwieser, and D. Herrmann. The other side of the coin: a framework for detecting and analyzing web-based cryptocurrency mining campaigns (ACMHamburg, 2018), pp. 18–11810. https://doi.org/10.1145/3230833.3230869. M. Musch, C. Wressnegger, M. Johns, K. Rieck, Web-based cryptojacking in the wild. CoRR abs/1808.09474 (2018). http://arxiv.org/abs/1808.09474. I. Petrov, L. Invernizzi, E. Bursztein, CoinPolice: detecting hidden cryptojacking attacks with neural networks (2020). http://arxiv.org/abs/2006.10861. A. Romano, Y. Zheng, W. Wang, in 2020 35th IEEE/ACM International Conference on Automated Software Engineering (ASE). Minerray: semantics-aware analysis for ever-evolving cryptojacking detection, (2020), pp. 1129–1140. M. Solanas, J. Hernandez-Castro, D. Dutta, Detecting fraudulent activity in a cloud using privacy-friendly data aggregates. CoRR abs/1411.6721 (2014). http://arxiv.org/abs/1411.6721. R. Ning, C. Wang, C. Xin, J. Li, L. Zhu, H. Wu, in IEEE INFOCOM 2019 - IEEE Conference on Computer Communications. Capjack: capture in-browser crypto-jacking by deep capsule network through behavioral analysis, (2019), pp. 1873–1881. https://doi.org/10.1109/INFOCOM.2019.8737381. R. Tahir, M. Huzaifa, A. Das, M. Ahmad, C. A. Gunter, F. Zaffar, M. Caesar, N. Borisov, in 20th International Symposium on Research in Attacks, Intrusions, and Defenses, 10453, ed. by M. Dacier, M. Bailey, M. Polychronakis, and M. Antonakakis. Mining on someone else's dime: mitigating covert mining operations in clouds and enterprises (SpringerAtlanta, 2017), pp. 287–310. 
https://doi.org/10.1007/978-3-319-66332-6_13. M. Conti, A. Gangwal, G. Lain, S. G. Piazzetta, Detecting covert cryptomining using HPC. CoRR abs/1909.00268 (2019). http://arxiv.org/abs/1909.00268. M. Bissaliyev, A. Nyussupov, S. Mussiraliyeva, Enterprise security assessment framework for cryptocurrency mining based on monero. J. Math. Mech. Comput. Sci.98(2), 67–76 (2018). https://doi.org/10.26577/jmmcs-2018-2-400. D. Draghicescu, A. Caranica, A. Vulpe, O. Fratu, in 2018 International Conference on Communications (COMM). Crypto-mining application fingerprinting method, (2018), pp. 543–546. https://doi.org/10.1109/ICComm.2018.8484745. F. Gomes, M. Correia, in 2020 IEEE 19th International Symposium on Network Computing and Applications (NCA). Cryptojacking detection with cpu usage metrics, (2020), pp. 1–10. https://doi.org/10.1109/NCA51143.2020.9306696. G. Gomes, L. Dias, M. Correia, in 2020 IEEE 19th International Symposium on Network Computing and Applications (NCA). Cryingjackpot: network flows and performance counters against cryptojacking, (2020), pp. 1–10. https://doi.org/10.1109/NCA51143.2020.9306698. F. Naseem, A. Aris, L. Babun, E. Tekiner, S. Uluagac, in Network and Distributed Systems Security (NDSS) Symposium 2021, February 21-25, 2021 Virtual. Minos*: a lightweight real-time cryptojacking detection system, (2021). https://dx.doi.org/10.14722/ndss.2021.24444. S. S. Ali, A. ElAshmawy, A. F. Shosha, in Proceedings of the International Conference on Security and Management (SAM). Memory forensics methodology for investigating cryptocurrency protocols, (2018), pp. 153–159. A. Gangwal, M. Conti, Cryptomining cannot change its spots: detecting covert cryptomining using magnetic side-channel. IEEE Transactions on Information Forensics and Security. 15:, 1630–1639 (2020). J. D'Herdt, Detecting crypto currency mining in corporate environments. Technical report, SANS (2018). https://www.sans.org/reading-room/whitepapers/threats/paper/35722. E. L. Jamtel, in 2018 11th International Conference on IT Security Incident Management IT Forensics (IMF). Swimming in the monero pools, (2018), pp. 110–114. https://doi.org/10.1109/IMF.2018.00016. A. Swedan, A. N. Khuffash, O. Othman, A. Awad, ed. by A. Abuarqoub, B. Adebisi, M. Hammoudeh, S. Murad, and M. Arioua. Proceedings of the 2nd International Conference on Future Networks and Distributed Systems (ACMAmman, 2018), pp. 23–12310. https://doi.org/10.1145/3231053.3231076. H. N. C. Neto, M. A. Lopez, N. C. Fernandes, D. M. F. Mattos, 75. Minecap: super incremental learning for detecting and blocking cryptocurrency mining on software-defined networking, (2020), pp. 121–131. https://doi.org/10.1007/s12243-019-00744-4. M. Caprolu, S. Raponi, G. Oligeri, R. D. Pietro, Cryptomining makes noise: detecting cryptojacking via machine learning. Comput. Commun.171:, 126–139 (2021). https://doi.org/10.1016/j.comcom.2021.02.016. A. Pastor, A. Mozo, S. Vakaruk, D. Canavese, D. R. López, L. Regano, S. Gómez-Canaval, A. Lioy, Detection of encrypted cryptomining malware connections with machine and deep learning. IEEE Access. 8:, 158036–158055 (2020). https://doi.org/10.1109/ACCESS.2020.3019658. Y. Feng, D. Sisodia, J. Li, in Proceedings of the 15th ACM Asia Conference on Computer and Communications Security ASIA CCS '20. Poster: content-agnostic identification of cryptojacking in network traffic (Association for Computing MachineryNew York, 2020), pp. 907–909. https://doi.org/10.1145/3320269.3405440. i Muñoz J.Z., J. Suárez-Varela, P. 
Barlet-Ros, in 5th IEEE International Symposium on Measurements & Networking, M&N. Detecting cryptocurrency miners with NetFlow/IPFIX network measurements (IEEECatania, 2019), pp. 1–6. https://doi.org/10.1109/IWMN.2019.8804995. V. Veselý, M. Zádník, How to detect cryptocurrency miners? by traffic forensics!Dig. Investig.31:, 100884 (2019). https://doi.org/10.1016/j.diin.2019.08.002. F. Iglesias, T. Zseby, Analysis of network traffic features for anomaly detection. Mach. Learn.101(1-3), 59–84 (2015). https://doi.org/10.1007/s10994-014-5473-9. MathSciNet Google Scholar SupportXMR, List of all Monero Pools (2021). http://moneropools.com/ Accessed 21 Apr 2021. Slushpool, What is Vardiff (variable difficulty algorithm)? (2021). https://help.slushpool.com/en/support/solutions/articles/77000433929-what-is-vardiff-variable-difficulty-algorithm- Accessed 21 Apr 2021. F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, E. Duchesnay, Scikit-learn: machine learning in Python. J. Mach. Learn. Res.12:, 2825–2830 (2011). MathSciNet MATH Google Scholar node-cryptonote-pool Developers, Mining pool for CryptoNote based coins such as Bytecoin and Monero. GitHub (2020). https://github.com/zone117x/node-cryptonote-pool. nodejs-pool Developers, nodejs-pool. GitHub (2020). https://github.com/Snipa22/nodejs-pool. Stichting Cuckoo Foundation, Cuckoo Sandbox - automated malware analysis (2021). https://cuckoosandbox.org/ Accessed 19 Apr 2021. K. Khade, X. Lin, WebCobra malware uses victims' computers to mine cryptocurrency (2018). https://securingtomorrow.mcafee.com/other-blogs/mcafee-labs/webcobra-malware-uses-victims-computers-to-mine-cryptocurrency/ Accessed 21 Apr 2021. N. Hyvärinen, NRSMiner updates to newer version (2019). https://labsblog.f-secure.com/2019/01/03/nrsminer-updates-to-newer-version/ Accessed 21 Apr 2021. Nanopool, Claymore-XMR-Miner (2017). https://github.com/nanopool/Claymore-XMR-Miner/blob/master/README.md Accessed 21 Apr 2021. OpenSSL Software Foundation, OpenSSL: Cryptography and SSL/TLS Toolkit (2020). https://www.openssl.org/docs/man1.0.2/man1/ciphers.html Accessed 21 Apr 2021. mineXMR.com, High performance Monero mining pool (2021). http://minexmr.com/ Accessed 21 Apr 2021. J. Grunzweig, Large scale Monero cryptocurrency mining operation using XMRig (2018). https://unit42.paloaltonetworks.com/unit42-large-scale-monero-cryptocurrency-mining-operation-using-xmrig/ Accessed 21 Apr 2021. CoinIMP, CoinIMP FREE JavaScript Mining (2020). https://www.coinimp.com/ Accessed 21 Apr 2021. PublicWWW, PublicWWW Source Code Search Engine (2020). https://publicwww.com/ Accessed 21 Apr 2021. Check Point, April 2019's most wanted malware: cyber criminals up to old 'trickbots' again (2019). https://www.checkpoint.com/press/2019/april-2019s-most-wanted-malware-cyber-criminals-up-to-old-trickbots-again/ Accessed 21 Apr 2021. Check Point, October 2019's most wanted malware: the decline of cryptominers continues, as Emotet Botnet expands rapidly (2019). https://www.checkpoint.com/press/2019/may-2019-most-wanted-malware-patch-now-to-avoid-the-bluekeep-blues/ Accessed 21 Apr 2021. Check Point, February 2020's most wanted malware: increase in exploits spreading the Mirai Botnet to IoT devices (2020). https://blog.checkpoint.com/2020/03/11/february-2020s-most-wanted-malware-increase-in-exploits-spreading-the-mirai-botnet-to-iot-devices/ Accessed 21 Apr 2021. 
Marathon Studios Inc., AbuseIPDB - IP address abuse reports (2021). https://www.abuseipdb.com/ Accessed 15 Oct 2021. Programmer All, Mac sample analysis (2020). https://programmerall.com/article/76021153067/ Accessed 15 Oct 2021. Peter "fracpete" Reutemann, Python wrapper for the Weka Machine Learning Workbench (2021). https://pypi.org/project/python-weka-wrapper/ Accessed 19 Apr 2021.

Author information

Michele Russo and Nedim Šrndić: Huawei Technologies Duesseldorf GmbH, Munich, Germany. Pavel Laskov: University of Liechtenstein, Vaduz, Liechtenstein.

MR performed the data acquisition and software implementation and the main part of data analysis and interpretation. NŠ participated in data analysis and interpretation. All authors participated in the conception, design, and manuscript writing. All authors read and approved the final manuscript. Correspondence to Michele Russo.

The authors are currently applying for patents relating to the content of the manuscript. MR and NŠ have received salary from an organization that has applied for patents relating to the content of the manuscript.

Russo, M., Šrndić, N. & Laskov, P. Detection of illicit cryptomining using network metadata. EURASIP J. on Info. Security 2021, 11 (2021). https://doi.org/10.1186/s13635-021-00126-1
Nano Express

Highly Reflective Thin-Film Optimization for Full-Angle Micro-LEDs

Zhi-Ting Ye, Wen-Tsung Ho & Chia-Hui Chen

Nanoscale Research Letters volume 16, Article number: 152 (2021)

Displays composed of micro-light-emitting diodes (micro-LEDs) are regarded as promising next-generation self-luminous screens and have advantages such as high contrast, high brightness, and high color purity. The luminescence of such a display is similar to that of a Lambertian light source. However, owing to the reduction in light source area, traditional secondary optical lenses are not suitable for adjusting the light field types of micro-LEDs, which causes problems that limit the application areas. This study presents the primary optical designs of dielectric and metal films to form highly reflective thin-film coatings with low absorption on the light-emitting surfaces of micro-LEDs to optimize light distribution and achieve full-angle utilization. Experimental results with the prototype show low voltage variation rates and low optical losses; the full width at half maximum (FWHM) of the light distribution is enhanced to 165°, while the center intensity is reduced to 63% of its original value. Hence, full-angle micro-LEDs with a highly reflective thin-film coating are realized in this work. Full-angle micro-LEDs offer advantages when applied to commercial advertising displays or plane light source modules that require wide viewing angles.

Displays have become an indispensable part of human life; smartphones, computer monitors, televisions (TVs), and commercial advertising screens are some examples of the most used display technologies. The current mainstream display technologies include liquid crystal displays (LCDs), organic light-emitting diodes (OLEDs), and micro-sized light-emitting diodes (micro-LEDs) [1,2,3]. LCDs have advantages such as long life, low price, and mature technology [4,5,6]; however, the overall light output efficiencies of large-sized direct-lit backlight LCDs are still low and their structure is complex, which makes it difficult to reduce the overall thickness [7,8,9]. OLEDs have the advantages of self-luminescence when applied to displays, small size, high flexibility, high contrast, and wide color gamut [10,11,12]; however, to solve the problem of poor color purity caused by mixing of the red, green, and blue sub-pixels when emitting light, it is necessary to use complex and fine metal masks, which limit the resolution and brightness of OLED displays and reduce their overall life spans owing to the characteristics of the internal organic materials [13,14,15]. Micro-LEDs have the advantages of high brightness, long life, and high efficiency, in addition to the advantages of LCDs and OLEDs [16,17,18]. Micro-LED displays are self-luminous and use extremely small micro-LED chips as point light sources, thereby offering high luminous efficiency, long life, high color purity, high contrast, and high chemical stability [19,20,21]; however, such displays still face challenges, such as the shrinking sizes of micro-LEDs and the high placement accuracy required of the transfer equipment, which complicate the mass transfer of large numbers of micro-LEDs [22,23,24].
In addition to the difficulties with the manufacturing process, when using micro-LEDs as light sources, the displayed light field patterns have Lambertian characteristics, which causes problems such as limited viewing angles when applied to commercial advertising displays [25]. Thus, increasing the light-emitting angles of micro-LEDs not only increases the viewing angles of displays but also reduces their numbers and thickness when used as the backlights of LCDs. Thus far, there is still a lack of research on optimizing the light-emitting angles of micro-LEDs, so improving this area of study is expected to be beneficial [26,27,28]. In recent years, scholars have proposed optical designs to optimize the light-emitting angles. Spägele et al. proposed supercell metasurfaces (SCMS) that use the coupling between adjacent atoms in the supercell to achieve wide-angle effects; Estakhri et al. proposed the design of a highly efficient back-reflected visible light gradient metasurface composed of TiOx nanowires to achieve wide angles; Deng et al. proposed thin metal nano-gratings with rectangular grooves to construct metasurfaces to increase the light exit angles [29,30,31]. Qiu et al. proposed Au nanomesh structures with disordered double-sized apertures as a new type of transparent conductive film to achieve wide viewing angles; Liu et al. proposed using graphene as a transparent conductive film because of its advantages of optical anisotropy and high light transmittance in large-angle incident areas; additionally, for infrared LEDs, Lee et al. studied the development of titanium–indium–tin oxide (TITO) thin films for low-temperature near-infrared light-emitting diodes (NIR-LEDs) by inserting 2-nm-thick Ti barriers between the top layers of the NIR-LEDs and ITO to achieve wide-angle effects [32,33,34]. Research related to modulating the light distributions using secondary optical elements has also been reported. Hu et al. designed a new free-form surface lens whose inner surface is a cylinder and outer surface is a free-form surface to optimize the light-emitting angles; Lin et al. proposed a Cartesian candela-distributed free-form lens array to optimize the LED lens array layout to achieve wide angles [35, 36]. In addition, research on modulating the light shape of Chip Scale Package light-emitting diodes (CSP LEDs) includes changing the traditional packaging structure and optimizing the light distribution for flat light sources [37, 38]. Several researchers have also considered various LED substrate designs to change the light field patterns. Lai et al. used a sulfuric acid wet etching process to form a triangular pyramid pattern on c-plane sapphire substrates to achieve higher light extraction efficiencies and increase the light angles; Lan et al. proposed a patterned sapphire substrate (PSS) combined with packaged inverted trapezoidal flip-chip micro-LEDs that show strong peaks and large light angles; Zhang et al. studied flip-chip deep-ultraviolet LEDs with nano-patterned sapphire substrate (NPSS) structures to show that the NPSS structure can achieve wide angles and enhance light extraction efficiency [39,40,41]. Optical components have also been added to optical modules to modulate the light distributions. Wang et al. proposed a compact high-directional backlight module combined with a striped diffuse reflector to diffuse light through a compact light guide plate and realize wide viewing angles; Li et al.
designed a quarter-wave plate of a multi-twist retarder to achieve achromatic aberration effects and wide viewing angles [42, 43]. To achieve a wide viewing angle, the LCD must be designed to match a wide-angle backlight and a suitable liquid crystal material; in this process, there are problems of lateral light leakage and color shift. With three groups of directional backlights and a fast-switching LCD panel, a time-multiplexed light field display with a 120-degree wide viewing angle has been demonstrated [44].

Thus, previous research on improving the light-emitting angles lacks investigations into the design of optical films on micro-LED chips to increase the light-emitting angles. As the sizes of micro-LEDs have been greatly reduced in recent times, it is impossible to adjust the light field types using secondary optical lenses as in traditional LEDs. Previous studies have also proposed adjusting the light field types with metal films; metals have excellent reflectivity at different angles, but they have high light absorption coefficients that reduce the light output efficiency. The reflectivity of dielectric materials at different angles is not as good as that of metals, but the materials themselves have low light absorption coefficients. This paper proposes a primary optical design combining dielectric and metal films to obtain low-absorption, high-reflectivity thin films deposited on the surfaces of micro-LEDs and achieve full-angle light distribution while accounting for the light output efficiency of the micro-LEDs. Full-angle micro-LEDs offer advantages when applied to commercial advertising displays or plane light source modules that require wide viewing angles.

Micro-LEDs Chip Sizes and Light Field Types

The dimensions of the micro-LEDs used in this study, namely the length Lc, width Wc, and height Hc, are 150 µm, 85 µm, and 85 µm, respectively. The light distribution curve of the bare chip is shown in Fig. 1. The intensity of the center point in the normal direction IC is 92%, the peak angle is 15°, and the center-point intensity ratio is calculated by Eq. (1). From the light distribution curve, it is seen that the micro-LEDs have a near-Lambertian light pattern, with a full width at half maximum (FWHM) of 135°; therefore, increasing the light-emitting angle to obtain full-angle luminescence without a secondary optical lens is the main focus of this work.

$$\frac{I_{\text{C}}\ (\text{Center light intensity})}{I_{\text{peak}}\ (\text{Peak angle intensity})} \times 100\%$$

Fig. 1: Micro-LEDs chip light distribution curve

Among the aforementioned parameters, low central light intensity and an increased peak luminous angle help improve the uniformity and viewing angle [45].
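As an illustration of how these quantities are obtained from a measured light distribution curve, the following minimal sketch computes the center-to-peak intensity ratio of Eq. (1), the peak angle, and the FWHM. The cosine-power profile used as input is synthetic and for illustration only; it does not reproduce the measured curve in Fig. 1.

```python
import numpy as np


def beam_metrics(angles_deg: np.ndarray, intensity: np.ndarray):
    """Return (center/peak ratio in %, peak angle in deg, FWHM in deg)
    for an angular intensity profile normalized to its maximum."""
    intensity = intensity / intensity.max()
    center_ratio = 100.0 * intensity[np.argmin(np.abs(angles_deg))]  # Eq. (1)
    peak_angle = abs(angles_deg[np.argmax(intensity)])
    half_power = angles_deg[intensity >= 0.5]
    fwhm = half_power.max() - half_power.min()
    return center_ratio, peak_angle, fwhm


# Synthetic near-Lambertian profile, for illustration only
angles = np.linspace(-90.0, 90.0, 361)
profile = np.cos(np.radians(angles)) ** 1.2
print(beam_metrics(angles, profile))  # -> (100.0, 0.0, ~112) for this synthetic curve
```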
This study presents the design of a highly reflective thin-film (HRTF) layer on the surface of the micro-LED chip, which includes a dielectric film made of stacked TiO2/SiO2 dielectric materials and a metal film made of Al. The structure of the micro-LEDs and the light path through it are shown in Fig. 2. The light exits through the multiple quantum well (MQW) layer and is partially reflected by the HRTF. Thereafter, the light exits from the sidewall of the Al2O3 layer, with an increased light exit angle from the micro-LEDs to realize a full-angle light exit.

Fig. 2: The light path within the full-angle micro-LEDs with HRTF coating

Materials of the HRTF

The choice of materials used in the optical film is crucial to achieve the desired characteristics. First, the material must have a low extinction coefficient in the required wavelength band to avoid reducing the light extraction efficiency owing to large absorption; then, the material's adhesion, physical and chemical stabilities, and light transmittance must be considered. The dielectric materials TiO2 and SiO2 have excellent characteristics for these properties in the visible light band. Al has a relatively high extinction coefficient, but its reflectivity does not decrease easily with increasing incident angle, and it can withstand high light intensities. Based on the above characteristics, the high refractive index material (H) TiO2 and the low refractive index material (L) SiO2 are used for the dielectric film, and Al is used for the metal film, with Al2O3 as the substrate for the optical thin-film design. The refractive indices of the materials used in this study at the dominant wavelength of 460 nm are shown in Table 1.

Table 1 Refractive indices and extinction coefficients of the materials used in this study at a dominant wavelength of 460 nm

HRTF Design Optimization

The substrate used for the light-emitting surface of the micro-LEDs is Al2O3. We designed the HRTF on the substrate and used the dielectric and metal films to improve reflectivity while maintaining high luminous efficiency. The goal here was to achieve a reflectance > 90% at the dominant wavelength of 460 nm. The principle behind the design of the HRTF is to use the destructive and constructive interference characteristics of light to improve reflectivity. Maximum light interference in the film medium occurs when the optical thickness is 1/4 of the wavelength, and the interface reflectivity R at this time is calculated according to Eq. (2) [46].

$$R = \frac{n_{\text{s}}\, n_{2}^{2P} - n_{\text{air}}\, n_{1}^{2P}}{n_{\text{s}}\, n_{2}^{2P} + n_{\text{air}}\, n_{1}^{2P}}$$

Here, P is the number of TiO2–SiO2 periods, \(n_{\text{s}}\) is the refractive index of the substrate, \(n_{1}\) is the refractive index of TiO2, \(n_{2}\) is the refractive index of SiO2, and \(n_{\text{air}}\) is the refractive index of the air medium. With the optical thickness of each dielectric layer set to 1/4 of the wavelength, the physical thicknesses of the Al, TiO2, and SiO2 layers are 20 nm, 47.78 nm, and 78.50 nm, respectively.
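As a quick check of these layer thicknesses, the following sketch evaluates the quarter-wave condition at 460 nm. The refractive indices used here are assumed values inferred from the quoted thicknesses, since the entries of Table 1 are not reproduced in the text.

```python
# Quarter-wave layer thicknesses for the dielectric part of the HRTF
# at the 460 nm design wavelength (n * d = lambda / 4).
wavelength_nm = 460.0
indices = {"TiO2 (H)": 2.407, "SiO2 (L)": 1.465}  # assumed values, cf. Table 1

for material, n in indices.items():
    d = wavelength_nm / (4.0 * n)  # physical thickness satisfying the quarter-wave condition
    print(f"{material}: n = {n:.3f}, quarter-wave thickness = {d:.2f} nm")
# -> TiO2 ~ 47.78 nm and SiO2 ~ 78.50 nm, matching the layer thicknesses quoted above
```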
This study uses the Macleod optical simulation software to simulate five thin-film structures: pure Al, Al/(HL), (HL)2, Al/(HL)2, and Al/(HL)3. Figure 3 shows the relationship between wavelength and reflectance for these five membrane stack structures in the simulated wavelength range of 400–500 nm. The reflectivity of pure Al, Al/(HL), (HL)2, Al/(HL)2, and Al/(HL)3 at 460 nm is 85.53%, 86.15%, 71.84%, 90.23%, and 93.04%, respectively.

Fig. 3: Reflectance of pure Al, Al/(HL), (HL)2, Al/(HL)2, and Al/(HL)3 simulated at wavelengths of 400–500 nm

Table 2 shows the reflectance, transmittance, and absorption ratios of the five membrane stack structures, namely pure Al, Al/(HL), (HL)2, Al/(HL)2, and Al/(HL)3. The transmittance of pure aluminum at 460 nm is 5% and its absorption is 9.47%, which is the highest absorption rate among the five types of membrane stacks. The transmittance of the (HL)2 membrane stack at 460 nm is 28.06% and its absorption is 0.1%; absorption directly affects the overall light extraction efficiency, and this membrane stack has the smallest absorption, but its reflectivity is only 71.84%. The Al/(HL)2 membrane stack has a transmittance of 4.38% at 460 nm and an absorption of 5.39%; this membrane stack structure balances the overall light extraction efficiency and the full-angle light distribution. Considering both the radiant flux and the overall light extraction efficiency, the Al/(HL)2 membrane stack structure was used in this study for the HRTF coating.

Table 2 Reflectance, transmittance, and absorption rates of pure Al, Al/(HL), (HL)2, Al/(HL)2, and Al/(HL)3 at 460 nm

Figure 4 shows the simulated reflectance and transmittance of Al/(HL)2 and (HL)2 for 400–500 nm. The average reflectance and transmittance of Al/(HL)2 are 89.6% and 4.54%, and the average reflectance and transmittance of (HL)2 are 70.3% and 29.56%, respectively. It can be seen from the simulation results that adding the thin aluminum layer increases the reflectivity by a factor of 1.27.

Fig. 4: Reflectance and transmittance ratios of the simulated thin-film structures of Al/(HL)2 and (HL)2 for wavelengths in the range of 400–500 nm

Figure 5a illustrates the changes in the transmittance and reflectance of Al/(HL)2 at different incident angles; from 0° to 60°, the average reflectance is 87.7% and the average transmittance is 6.97%. Figure 5b shows the transmittance and reflectance of (HL)2 at different incident angles; from 0° to 60°, the average reflectance is 68.99% and the average transmittance is 30.88%. For the full-angle reflective film design, the simulation results show that adding the thin aluminum layer increases the angle-averaged reflectance by a factor of 1.27.

Fig. 5: Reflectance and transmittance ratio changes of the simulated a Al/(HL)2 and b (HL)2 for incident angles of 0–90°

Figure 6 shows the simulated wavelength/incidence angle/reflectivity 3D diagram of Al/(HL)2 for incident angles of 0–25° and average reflectivity exceeding 90% in the wavelength range of 440–480 nm.

Fig. 6: 3D relationship diagram of the simulated wavelengths, incident angles, and reflectivity of Al/(HL)2

Figure 7 shows the scanning electron microscope (SEM) images of the micro-LED chip with the HRTF coating. The chip length Lc is 240 µm, width Wc is 140 µm, and height Hc is 100 µm. Figure 7a shows the top view, and Fig. 7b shows the bottom view.

Fig. 7: SEM images of micro-LEDs chip: a top and b bottom views

Fig. 8: Cross-sectional SEM image of the HRTF

Figure 8 shows the cross-sectional SEM image of the micro-LED chip with HRTF coating. The HRTF prototype film stack includes an Al film thickness of 20.6 nm, TiO2 dielectric film thicknesses of 46.3 nm and 46.2 nm, and SiO2 dielectric film thicknesses of 77.5 nm and 77.1 nm.

Figure 9 shows the measured luminance–current–voltage (L–I–V) curves. Under an input current of 30 mA, the results show that without the HRTF coating, the output radiant flux, voltage, and external quantum efficiency (EQE) are 33.833 mW, 3.293 V, and 41.84%, respectively. With the HRTF coating, the voltage, output power, and EQE are 3.301 V, 32.757 mW, and 40.51%, respectively. The results show that the HRTF coating hardly affects the current versus voltage (I–V) characteristics of the micro-LEDs, and the EQE with the HRTF coating decreases by only 3.178% relative to the bare chip.
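For clarity, the relative changes quoted above can be verified directly from the measured values; the snippet below reproduces the roughly 3.18% relative EQE drop and the similar relative drop in radiant flux at 30 mA.

```python
# Quick consistency check of the quoted relative changes at 30 mA
eqe_bare, eqe_hrtf = 41.84, 40.51      # external quantum efficiency, %
flux_bare, flux_hrtf = 33.833, 32.757  # radiant flux, mW

eqe_drop = (eqe_bare - eqe_hrtf) / eqe_bare * 100
flux_drop = (flux_bare - flux_hrtf) / flux_bare * 100
print(f"relative EQE drop:  {eqe_drop:.3f} %")   # ~3.18 %, consistent with the quoted 3.178 %
print(f"relative flux drop: {flux_drop:.2f} %")  # ~3.2 %, close to the ~3.3 % quoted in the text
```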
Fig. 9: Photoelectric characteristics of the micro-LEDs without and with HRTF coating

As the input current increases to 50 mA, the voltage and output power increase to 3.5 V and 48.165 mW, respectively, and the radiant flux is only about 3.3% lower than that of the micro-LEDs without the HRTF coating. This shows that micro-LEDs with HRTF coatings have low voltage variation rates and low optical losses. Figure 10 shows the dominant-wavelength drift with current for the micro-LEDs with and without the HRTF stack coating. The orange line represents the bare micro-LEDs and the blue line the micro-LEDs with HRTF coating. When the current increases from 2 to 30 mA, the peak wavelength changes from 465.47 to 460.01 nm, indicating that the micro-LEDs coated with the Al/(HL)2 membrane stack show a dominant-wavelength shift of only 5.46 nm over this current range; hence, these results show that the photoelectric properties of the original bare micro-LEDs are maintained.

Fig. 10: Changes in the dominant wavelength characteristic curves of micro-LEDs with and without Al/(HL)2 film stack coating

Figure 11 shows the temperature versus peak wavelength characteristic curves. The orange line represents the bare micro-LEDs, and the blue line the micro-LEDs with HRTF coating. As the temperature increases from 25 to 105 °C, the peak wavelength is red-shifted from 460.09 to 462.45 nm; these two curves show that the original photoelectric characteristics are still maintained after the HRTF coating. The dominant wavelength shift is only 2.36 nm.

Fig. 11: Characteristic curves of the peak wavelengths for micro-LEDs with and without Al/(HL)2 film stack coatings based on temperature variations

The long-term stability test of the HRTF is shown in Fig. 12. The test ambient temperature is 25 °C and the drive current is 30 mA. At 1000 h, the radiant flux is maintained at 98.5%.

Fig. 12: The long-term stability test of HRTF

Figure 13 shows the light distribution curves of the bare and HRTF-coated micro-LEDs. The black line represents the light field pattern of the bare micro-LEDs, whose FWHM is 135°, center light intensity is 92%, and peak angle is 15°. The red line represents the light distribution of the micro-LEDs with HRTF coating, whose FWHM is increased to 165°, center light intensity is reduced to 63%, and peak angle is increased to 37.5°.

Fig. 13: Light distribution curves of bare and HRTF-coated micro-LEDs

Figure 14 shows the luminous distributions of the (a) bare and (b) HRTF-coated micro-LEDs. Figure 14b shows that the luminous distribution of the micro-LEDs with HRTF coating has wider angles and a more uniform distribution.

Fig. 14: Schematic of the luminous distributions of a bare and b HRTF-coated micro-LEDs

The chromatic aberration between different areas of the HRTF when applied to a large wide-angle display screen is shown in Fig. 15.

Fig. 15: Reflectance relationship of different wavelengths corresponding to HRTF

In this work, the HRTF design is optimized for the 440–460 nm wavelength range. For future full-color applications, the aluminum film thickness can be increased to 50 nm or more, which will give better color uniformity across the full visible range (400–780 nm).

Conclusions

We propose the design of an HRTF coating on the surfaces of micro-LEDs to increase their light distribution angles to achieve full viewing angles. We use a primary optical design to modulate the light shapes of the micro-LEDs without secondary optical elements.
The HRTF film stack structure is optimized using Al/(HL)2 to obtain high reflection and low absorption. Measurements of the L–I–V curves of prototype micro-LEDs show that the HRTF coating has almost no impact on the I–V characteristics of the micro-LEDs under an input current of 30 mA, and the radiation flux is only 3.3% lower than that of the bare micro-LEDs. In terms of light-emitting angles, the center light intensity of the micro-LEDs with HRTF coating is reduced from 92% to 63%, the peak angle increases from 15° to 37.5°, and the FWHM is enhanced from 135° to 165°. The results of the evaluation experiments show that micro-LEDs with HRTF coating have low voltage variation rates, low optical losses, and a large full-angle light distribution of 165°. The full-angle micro-LEDs are fabricated with consideration of the overall light efficiency while still maintaining the photoelectric characteristics of bare micro-LEDs; these micro-LEDs offer advantages when applied to displays or plane light source modules that require wide viewing angles.

The datasets supporting the conclusions of this article are available in the article.

Abbreviations

micro-LEDs: Micro-light-emitting diodes
FWHM: Full width at half maximum
LCDs: Liquid crystal displays
OLEDs: Organic light-emitting diodes
SCMS: Supercell metasurfaces
TITO: Titanium–indium–tin oxide
NIR-LEDs: Near-infrared light-emitting diodes
CSP LEDs: Chip Scale Package light-emitting diodes
PSS: Patterned sapphire substrate
NPSS: Nano-patterned sapphire substrate
Lc: Micro-LED length
Wc: Micro-LED width
Hc: Micro-LED height
Ipeak: Peak angle intensity
IC: Center light intensity
HRTF: Highly reflective thin film
MQW: Multiple quantum well
H: High refractive index material
L: Low refractive index material
k: Extinction coefficient
L–I–V: Luminance–current–voltage
I–V: Current versus voltage

References

Chen HW, Lee JH, Lin BY, Chen S, Wu ST (2018) Liquid crystal display and organic light-emitting diode display: present status and future perspectives. Light Sci Appl 7:17168. https://doi.org/10.1038/lsa.2017.168 Ko YH, Jalalah M, Lee SJ, Park JG (2018) Super ultra-high resolution liquid-crystal-display using perovskite quantum-dot functional color-filters. Sci Rep. https://doi.org/10.1038/s41598-018-30742-w Huang Y, Hsiang EL, Deng MY, Wu ST (2020) Mini-LED, Micro-LED and OLED displays: present status and future perspectives. Light Sci Appl. https://doi.org/10.1038/s41377-020-0341-9 Chen E, Guo J, Jiang Z, Shen Q, Ye Y, Xu S, Sun J, Yan Q, Guo T (2021) Edge/direct-lit hybrid mini-LED backlight with U-grooved light guiding plates for local dimming. Opt Express. https://doi.org/10.1364/OE.421346 Chen H, Zhu R, Li MC, Lee SL, Wu ST (2017) Pixel-by-pixel local dimming for high-dynamic-range liquid crystal displays. Opt Express. https://doi.org/10.1364/OE.25.001973 Huang B-L, Guo T-L, Xu S, Ye Y, Chen E-G, Lin Z-X (2019) Color converting film with quantum-dots for the liquid crystal displays based on inkjet printing. IEEE Photonics J. https://doi.org/10.1109/jphot.2019.2911308 Jiang Y, Qin G, Xu X, Zhou L, Lee S, Yang DK (2018) Image flickering-free polymer stabilized fringe field switching liquid crystal display. Opt Express. https://doi.org/10.1364/OE.26.032640 Cunningham PD, Souza JB Jr, Fedin I, She C, Lee B, Talapin DV (2016) Assessment of anisotropic semiconductor nanorod and nanoplatelet heterostructures with polarized emission for liquid crystal display technology. ACS Nano.
This work was partially financially supported by TO2M Corporation, Hsinchu 30010, Taiwan. This work was also partially financially supported by the Ministry of Science and Technology, the Republic of China (Grant No. MOST 110-2221-E-194-036 and 110-2218-E-002-032-MBK), and the Advanced Institute of Manufacturing with High-Tech Innovations from the Featured Areas Research Center Program within the framework of the Higher Education Sprout Project by the Ministry of Education in Taiwan.

Department of Mechanical Engineering, Advanced Institute of Manufacturing with High-Tech Innovations, National Chung Cheng University, 168, University Rd., Min-Hsiung, Chia-Yi, 62102, Taiwan: Zhi-Ting Ye & Chia-Hui Chen
Department of R&D, General Manager's Office, TO2M Corporation, Hsinchu, 30010, Taiwan: Wen-Tsung Ho

ZTY and WTH designed the experiments. CHC analyzed data and wrote the draft manuscript. ZTY and WTH discussed the results and contributed to the writing of the manuscript. All authors read and approved the final manuscript.

Correspondence to Zhi-Ting Ye.

Ye, ZT., Ho, WT. & Chen, CH. Highly Reflective Thin-Film Optimization for Full-Angle Micro-LEDs. Nanoscale Res Lett 16, 152 (2021). https://doi.org/10.1186/s11671-021-03611-1

Micro-LEDs; Secondary optical lens; Full-angle; Primary optical design
CommonCrawl
Trig Graphs (deg)
Intro to sin(x), cos(x) and tan(x)
Key features of sine and cosine curves
Amplitude of sine and cosine
Period changes for sine and cosine
Phase shifts for sine and cosine
Transformations of sine and cosine curves and equations

Consider the functions $f\left(x\right)=\sin x$ and $g\left(x\right)=\sin5x$.

State the period of $f\left(x\right)$ in degrees.

Complete the table of values for $g\left(x\right)$.

$x$: $0^\circ$, $18^\circ$, $36^\circ$, $54^\circ$, $72^\circ$, $90^\circ$, $108^\circ$, $126^\circ$, $144^\circ$
$g\left(x\right)$: (values to be filled in)

State the period of $g\left(x\right)$ in degrees.

What transformation of the graph of $f\left(x\right)$ results in the graph of $g\left(x\right)$?
Horizontal enlargement by a factor of $5$.
Horizontal enlargement by a factor of $\frac{1}{5}$.
Vertical enlargement by a factor of $5$.
Vertical enlargement by a factor of $\frac{1}{5}$.

The graph of $f\left(x\right)$ has been provided below. By moving the points, graph $g\left(x\right)$.

Consider the functions $f\left(x\right)=\cos x$ and $g\left(x\right)=\cos4x$.

Consider the functions $f\left(x\right)=\cos x$ and $g\left(x\right)=\cos\left(\frac{x}{2}\right)$.

The functions $f\left(x\right)$ and $g\left(x\right)=f\left(kx\right)$ have been graphed on the same set of axes, in grey and black respectively.
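For a quick numerical check of the period and the table above, here is a short sketch (angles in degrees; values rounded to two decimal places).

```python
import math

# Period of g(x) = sin(kx) in degrees is 360/k; here k = 5, so the period is 72 degrees.
k = 5
print("period of g(x):", 360 / k)

# Values of g(x) = sin(5x) at the x-values used in the table (x in degrees).
xs = [0, 18, 36, 54, 72, 90, 108, 126, 144]
print({x: round(math.sin(math.radians(k * x)), 2) for x in xs})
```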
CommonCrawl
Single-frame wide-field nanoscopy based on ghost imaging via sparsity constraints

Wenwen Li,1,3 Zhishen Tong,2,4 Kang Xiao,1,5 Zhentao Liu,2 Qi Gao,1 Jing Sun,1 Shupeng Liu,5 Shensheng Han,2,6 and Zhongyang Wang1,7

1Shanghai Advanced Research Institute, Chinese Academy of Sciences, Shanghai 201210, China
2Key Laboratory for Quantum Optics and Center for Cold Atom Physics of CAS, Shanghai Institute of Optics and Fine Mechanics, Chinese Academy of Sciences, Shanghai 201800, China
3School of Microelectronics, University of Chinese Academy of Sciences, Beijing 100049, China
4Center of Materials Science and Optoelectronics Engineering, University of Chinese Academy of Sciences, Beijing 100049, China
5Shanghai University, Shanghai 200444, China
6e-mail: [email protected]
7e-mail: [email protected]
Zhentao Liu https://orcid.org/0000-0002-4453-8043

Optica, Vol. 6, pp. 1515-1523, https://doi.org/10.1364/OPTICA.6.001515
Wenwen Li, Zhishen Tong, Kang Xiao, Zhentao Liu, Qi Gao, Jing Sun, Shupeng Liu, Shensheng Han, and Zhongyang Wang, "Single-frame wide-field nanoscopy based on ghost imaging via sparsity constraints," Optica 6, 1515-1523 (2019)

Topics: Speckle patterns; Structured illumination microscopy; X-ray imaging
Revised Manuscript: November 2, 2019; Manuscript Accepted: November 2, 2019

Single-molecule, localization-based, wide-field nanoscopy often suffers from low time resolution because the localization of a single molecule with high precision requires a low emitter density of fluorophores. In addition, to reconstruct a super-resolution image, hundreds or thousands of image frames are required, even when advanced algorithms, such as compressive sensing and deep learning, are applied. These factors limit the application of these nanoscopy techniques for living cell imaging. In this study, we developed a single-frame, wide-field nanoscopy system based on ghost imaging via sparsity constraints (GISC), in which a spatial random phase modulator is applied in a wide-field microscope to achieve random measurement of fluorescence signals. This method can effectively use the sparsity of fluorescence emitters to enhance the imaging resolution to 80 nm by reconstructing one raw image using compressive sensing. We achieved an ultrahigh emitter density of $143\;\mu\mathrm{m}^{-2}$ while maintaining the precision of single-molecule localization below 25 nm. We show that by employing a high-density of photo-switchable fluorophores, GISC nanoscopy can reduce the number of sampling frames by one order of magnitude compared to previous super-resolution imaging methods based on single-molecule localization. GISC nanoscopy may therefore improve the time resolution of super-resolution imaging for the study of living cells and microscopic dynamic processes.

© 2019 Optical Society of America under the terms of the OSA Open Access Publishing Agreement
Fig. 1. Schematic diagram of experimental setup and imaging process. (a) Experimental setup: A random phase modulator with a low magnification objective (${10} \times $) is set before sCMOS in a conventional inverted microscope to form speckle patterns of fluorescence signals. Fluorescence images are directly recorded. RPM, Random phase modulator; L, Lens; I, Iris; M, Mirror; DM, Dichroic mirror; EX, Excitation filter; EM, Emission filter; PBS, Polarization beam splitter; OL, Objective lens; and NP, Nanopositioning stage. (b) Calibration and imaging processes: All speckle patterns generated from each position of the sample plane are recorded as the random measurement matrix in the calibration process. One speckle image from the actual imaged sample is obtained in the imaging process and then a super-resolution image can be reconstructed via compressive sensing.

Fig. 2. Imaging resolution and its influencing factors for GISC nanoscopy. (a) Normalized mutual correlation curve to compare the performance for the spatial resolution of the speckle pattern and PSF. The resolution of speckles can be increased by a factor of $\sqrt 2 $ compared to that of the PSF. (b) Diffraction limited rings with different spacings (80, 120, and 240 nm) are created and the single-frame imaging results reconstructed by the GPSR algorithm are shown in the middle column; the scale bar is 500 nm long, corresponding to the wide-field images shown in left column (160 nm/pixel); the spatial resolution of 80 nm is determined by the Rayleigh criterion shown in the right column; and the black and red lines represent the normalized intensity distribution along the blue line in the wide field and the reconstructed images, respectively. (c) Effect of different SNRs and Cs on the resolution.
The inset images show the reconstruction images at ${\rm SNR} = {20}$ and $C = {0.75}$, and ${\rm SNR} = {10}$ and $C = {0.75}$; the right image represents the threshold for accurate recovery using compressive sensing; and the left image and the dashed lines indicate the results and the extent of inaccurate recovery using compressive sensing, respectively.

Fig. 3. Capability of GISC nanoscopy to identify molecules efficiently at a high density and the effect of the SNR. (a) Comparison of reconstruction results (white grids) to the true molecule positions (red crosses) at $50.7\;\mu\mathrm{m}^{-2}$; scale bar of 100 nm. The smaller figure (upper left corner) corresponds to the wide-field image (160 nm/pixel). (b) Density of identified molecules and localization precision versus different densities and different SNRs obtained with the OMP localization algorithm ($C = {0.75}$).

Fig. 4. Experimental results of nanometer rulers. (a) Reconstruction results of the images of the 270 (upper row) and 160 (lower row) nm rulers at ${\rm SNR} = {20}$ and $C = {0.75}$ and the corresponding wide-field images; the yellow boxes represent the reconstruction results of diffraction-limited rulers, corresponding to "+" in the wide-field images. The scale of the wide-field images is 160 nm per pixel; the scale bars in the other images indicate 500 nm. (b) Histogram and normalized intensity distribution along the white line in the reconstructed images of the 270 (upper row) and 160 (lower row) nm rulers. (c) Statistical analysis of accuracy for 40 reconstructed images of the 270 (upper row) and 160 (lower row) nm rulers.

Fig. 5. Simulation and experimental results that demonstrate the capability of GISC-STORM to identify molecules efficiently at high densities. (a) Comparison of the performances of GISC-STORM, CS-STORM, and the single-molecule fitting method for molecule identification. The dashed line indicates the case where the number of identified molecules equals the number of molecules present. (b) Comparison of localization precisions. (c) Reconstruction results of the ring with a spacing of 60 nm from 4000, 500, and 10 frames by the single-molecule fitting, CS-STORM, and GISC-STORM, respectively. (d) Reconstructed result (upper right corner) and histogram for one representative DNA origami structure at a designed distance of 40 nm from 100 frames. The histogram shows the accumulated intensity of the red line in the inset image (20 nm/pixel). Scale bar: 500 nm.

(1) $$\Delta G^{(2)}(r'_{i,j}) = \big\langle I_r(r'_{i,j})\, I_t(r'_{i,j}) \big\rangle \approx \frac{1}{M_{i'} M_{j'}} \sum_{i'=1}^{M_{i'}} \sum_{j'=1}^{M_{j'}} I^{(i',j')}_{r(i,j)}\, I^{(i',j')}_{t} \propto \Bigg\{ T_i(r'_{i,j}) \otimes \bigg\{ \Big\{ \frac{\pi a^2}{\lambda f} \Big[ \frac{2 J_1\!\big(\tfrac{2\pi a}{\lambda f} r'_{i,j}\big)}{\tfrac{2\pi a}{\lambda f} r'_{i,j}} \Big] \Big\} \otimes \exp\Big\{ -2 \Big[ \frac{2\pi \omega (n-1)}{\lambda} \Big]^2 \Big\{ 1 - \exp\Big[ -\Big( \frac{z_2\, r'_{i,j}}{(z_1+z_2)\,\zeta} \Big)^2 \Big] \Big\} \Big\} \bigg\}^2 \Bigg\}$$

(2) $$\min \|x\|_0 \quad \text{s.t.} \quad Y = AX$$

(3) $$M \ge C_0 \cdot |K| \cdot \mu^2(A) \cdot \log(N/\delta)$$

(4) $$\rho_{\mathrm{error}} \le \frac{C\kappa^2}{\delta_{2K}^{1/2}} \quad \text{if} \quad \mathrm{SNR} \ge \frac{\kappa}{\delta_{2K}^{3/4}}$$
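The captions above mention GPSR and OMP as sparse solvers for the reconstruction problem of Eq. (2). As a reading aid, here is a minimal, self-contained orthogonal matching pursuit sketch for a generic $y = Ax$ problem; the matrix sizes, sparsity level, and toy data are assumptions for illustration, not the paper's calibration data.

```python
# Minimal orthogonal matching pursuit (OMP) sketch for sparse recovery from y = A x.
# The sizes and toy signal below are assumptions, not the paper's measurement data.
import numpy as np

def omp(A, y, sparsity, tol=1e-6):
    m, n = A.shape
    residual, support, x = y.copy(), [], np.zeros(n)
    for _ in range(sparsity):
        j = int(np.argmax(np.abs(A.T @ residual)))   # column most correlated with residual
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef          # re-fit on the selected support
        if np.linalg.norm(residual) < tol:
            break
    x[support] = coef
    return x

# Toy check: a random Gaussian sensing matrix and a 3-sparse ground truth.
rng = np.random.default_rng(0)
A = rng.standard_normal((64, 256))
x_true = np.zeros(256)
x_true[[10, 100, 200]] = [1.0, -0.5, 2.0]
x_hat = omp(A, A @ x_true, sparsity=3)
print(np.allclose(x_hat, x_true, atol=1e-6))
```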
CommonCrawl
Great paper by the founders of Smoothed Analysis! Some nice quotes:
- "if one can prove that an algorithm performs well in the worst case, then one can be confident that it will work well in every domain. However, there are many algorithms that work well in practice that do not work well in the worst case. Smoothed analysis provides a theoretical framework for explaining why some of these algorithms do work well in practice."
- "The performance profiles of algorithms across the landscape of input instances can differ greatly and can be quite irregular."
- "If a single input instance triggers an exponential run time, the algorithm is called an exponential-time algorithm."
- "While polynomial time algorithms are usually viewed as being efficient, we clearly prefer those whose run time is a polynomial of low degree, especially those that run in nearly linear time."
- "Developing means for predicting the performance of algorithms and heuristics on real data and on real computers is a grand challenge in algorithms."
- "We hope that theoretical explanations will be found for the success in practice of many of these algorithms, and that these theories will catalyze better algorithm design."
- "there are many problems that need to be solved in practice for which we do not know algorithms with good worst-case performance. Instead, scientists and engineers typically use heuristic algorithms to solve these problems. Many of these algorithms work well in practice, in spite of having a poor, sometimes exponential, worst-case running time. Practitioners justify the use of these heuristics by observing that worst-case instances are usually not "typical" and rarely occur in practice. The worst-case analysis can be too pessimistic."
- "heuristics are often used to speed up the practical performance of implementations that are based on algorithms with polynomial worst-case complexity. These heuristics might in fact worsen the worst-case performance, or make the worst-case complexity difficult to analyze."
- "While one would ideally choose the distribution of inputs that occur in practice, this is difficult as it is rare that one can determine or cleanly express these distributions, and the distributions can vary greatly between one application and another. Instead, average-case analyses have employed distributions with concise mathematical descriptions, such as Gaussian random vectors, uniform {0, 1} vectors, and Erdos-Renyi random graphs. The drawback of using such distributions is that the inputs actually encountered in practice may bear very little resemblance to the inputs that are likely to be generated by such distributions."
- "Because of the intrinsic difficulty in defining practical distributions, we consider an alternative approach to modeling real data. The basic idea is to identify typical properties of practical data, define an input model that captures these properties, and then rigorously analyze the performance of algorithms assuming their inputs have these properties. Smoothed analysis is a step in this direction. It is motivated by the observation that practical data are often subject to some small degree of random noise."
- "At a high level, each input is generated from a two-stage model: In the first stage, an instance is generated and in the second stage, the instance from the first stage is slightly perturbed. The perturbed instance is the input to the algorithm."
- "we hope insights gained from smoothed analysis will lead to new ideas in algorithm design [...]
we suggest that it might be possible to solve some problems more efficiently by perturbing their inputs."

I'm in love with this paper!

### Non-worst-case analysis:

In practice, "we are not interested in all problem instances, but only in those which can actually occur in reality." "The notion of stability [...] is a concrete way to formalize the notion that the only instances of interest are those for which small perturbation in the data (which may reflect e.g. some measurement errors) do not change the optimal partition of the graph." This stability analysis is different from "Smoothed Analysis" where "one shows that the hard instances form a discrete and isolated subset of the input space".

### Open problems:

**Conjecture:** There exists some constant $\gamma^{*}$ such that $\gamma^{*}$-stable instances can be solved in polynomial time.

**Question:** it is shown that $\gamma$-stable instances, with $\gamma>\sqrt{\Delta n}$, can be solved in polynomial time. Can this be improved (without further assumptions such as a lower bound on the minimum degree)? As $\sqrt{\Delta n}$ is usually large, this may not be useful in practice.

**Question:** How does the algorithm "FindMaxCut" (page 6) perform in practice on real-world instances???

**Question:** What about the greedy heuristic: start from a random cut and do passes on the nodes, moving each node to the other side of the cut if the size of the cut increases, until convergence. Does it have some guarantee on $\gamma$-stable instances? (A minimal sketch of this heuristic is given after these notes.)

### Extended spectral clustering:

"Let D be a diagonal matrix. Think of W + D as the weighted adjacency matrix of a graph, with loops added. Such loops do not change the weight of any cut, so that regardless of what D we choose, a cut is maximal in W iff it is maximal in W + D. Furthermore, it is not hard to see that W is $\gamma$-stable, iff W + D is. Our approach is to first find a "good" D, and then take the spectral partitioning of W + D as the maximal cut. These observations suggest the following question: Is it true that for every $\gamma$-stable instance W with γ large enough there exists a diagonal D for which extended spectral partitioning solves Max-Cut? If so, can such a D be found efficiently? Below we present certain sufficient conditions for these statements."

I did not fully understand what is presented below that paragraph. Let G be a $\gamma$-stable graph, how do I get $D$?

### Goemans-Williamson algorithm:

The approximation guarantee of the Goemans-Williamson algorithm is better on $\gamma$-stable instances than in general.

### Random model:

With high probability, the extended spectral clustering leads to the optimal cut on $\gamma$-stable instances generated from a certain random model for $\gamma\geq 1+\Omega(\sqrt{\frac{\log(n)}{n}})$.

### Typos:
- page 3, Proposition 2.1: " A graph G graph"
- page 4: "this follows from Definition 2.1", should be "Proposition 2.1".
- page 5, Definition 2.2: should be "E" instead of "e" in the equation.
- page 5: "which must to be on the" and "of the optional cut" -> "optimal".
- page 8: "we multiply it be a PSD matrix"
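As promised above, here is a minimal sketch of that greedy local-search heuristic, written for illustration only; nothing here claims any guarantee on $\gamma$-stable instances.

```python
# Greedy single-flip local search for Max-Cut: start from a random cut and keep moving
# single nodes across the cut while that strictly increases the cut weight.
import random

def greedy_max_cut(n, weights):
    """weights: dict mapping frozenset({u, v}) -> edge weight, nodes are 0..n-1."""
    side = {v: random.choice((0, 1)) for v in range(n)}
    improved = True
    while improved:
        improved = False
        for v in range(n):
            # gain of flipping v = (weight to same-side neighbours) - (weight already cut)
            gain = sum(w if side[u] == side[v] else -w
                       for e, w in weights.items() if v in e
                       for u in e if u != v)
            if gain > 0:
                side[v] = 1 - side[v]
                improved = True
    return side

# toy example: a 4-cycle; the optimum separates opposite vertices, but the
# heuristic may also stop in a worse local optimum depending on the random start
w = {frozenset(e): 1.0 for e in [(0, 1), (1, 2), (2, 3), (3, 0)]}
print(greedy_max_cut(4, w))
```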
Nice paper building on top of [the WebGraph framework](https://papers-gamma.link/paper/31) and [Chierichetti et al.](https://papers-gamma.link/paper/126) to compress graphs.

### Approximation guarantee

I read: "our algorithm is inspired by a theoretical approach with provable guarantees on the final quality, and it is designed to directly optimize the resulting compression ratio." I misunderstood initially, but the proposed algorithm actually does not have any provable approximation guarantee other than the $\log(n)$ one (which is also obtained by a random ordering of the nodes). Designing an algorithm with (a better) approximation guarantee for minimizing "MLogA", "MLogGapA" or "BiMLogA" seems to be a nice open problem.

### Objectives

Is there any better objective than "MLogA", "MLogGapA" or "BiMLogA" to have a proxy of the compression obtained by the BV-framework? Is it possible to directly look for an ordering that minimizes the size of the output of the BV compression algorithm?
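To make the objective discussion concrete, here is a rough, home-made proxy: for a given ordering it sums $\log_2$ of the gaps between consecutive (reordered) out-neighbours, as a stand-in for the gap-coding bits a BV-style encoder would spend. It is only an illustration, not the exact MLogGapA/BiMLogA definition from the paper.

```python
# Home-made log-gap proxy for the cost of a node ordering; illustrative only.
import math

def log_gap_cost(order, adjacency):
    """order: list of node ids; adjacency: dict node -> iterable of out-neighbours."""
    pos = {v: i for i, v in enumerate(order)}
    cost = 0.0
    for u, neigh in adjacency.items():
        ranks = sorted(pos[v] for v in neigh)
        prev = pos[u]
        for r in ranks:
            cost += math.log2(abs(r - prev) + 1)   # +1 avoids log(0) on zero gaps
            prev = r
    return cost

adj = {0: [1, 2, 5], 1: [0, 2], 2: [3], 3: [], 4: [5], 5: [4]}
print(log_gap_cost([0, 1, 2, 3, 4, 5], adj))
```

Comparing this proxy across candidate orderings is one cheap way to probe the "better objective" question raised above.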
CommonCrawl
A selection theorem in metric trees
by A. G. Aksoy and M. A. Khamsi

In this paper, we show that nonempty closed convex subsets of a metric tree enjoy many properties shared by convex subsets of Hilbert spaces and admissible subsets of hyperconvex spaces. Furthermore, we prove that a set-valued mapping $T^*$ of a metric tree $M$ with convex values has a selection $T: M\rightarrow M$ for which $d(T(x),T(y))\leq d_H(T^*(x),T^*(y))$ for each $x,y \in M$. Here by $d_H$ we mean the Hausdorff distance. Many applications of this result are given.
A. G. Aksoy
Affiliation: Department of Mathematics, Claremont McKenna College, Claremont, California 91711
MR Author ID: 24095
Email: [email protected]
M. A. Khamsi
Affiliation: Department of Mathematical Sciences, University of Texas at El Paso, El Paso, Texas 79968-0514
Email: [email protected]
Received by editor(s): April 27, 2005
Published electronically: May 1, 2006
Communicated by: Jonathan M. Borwein
MSC (2000): Primary 47H04, 47H10, 54H25, 47H09
CommonCrawl
Effects of the weighting matrix on dynamic manipulability of robots

Morteza Azad ORCID: orcid.org/0000-0003-3611-618X1, Jan Babič ORCID: orcid.org/0000-0002-1870-82642 & Michael Mistry3
Autonomous Robots volume 43, pages 1867–1879 (2019)

Dynamic manipulability of robots is a well-known tool to analyze, measure and predict a robot's performance in executing different tasks. This tool provides a graphical representation and a set of metrics as outcomes of a mapping from joint torques to the acceleration space of any point of interest of a robot such as the end-effector or the center of mass. In this paper, we show that the weighting matrix, which is included in the aforementioned mapping, plays a crucial role in the results of the dynamic manipulability analysis. Therefore, finding proper values for this matrix is the key to achieving reliable results. This paper studies the importance of the weighting matrix for dynamic manipulability of robots, which is overlooked in the literature, and suggests two physically meaningful choices for that matrix. We also explain three different metrics, which can be extracted from the graphical representations (i.e. ellipsoids) of the dynamic manipulability analysis. The application of these metrics in measuring a robot's physical ability to accelerate its end-effector in various desired directions is discussed via two illustrative examples.

To build a high performance robot, design is probably the most important process, which hugely influences the robot's performance. Designing a robot (i.e. determining the values of its design parameters such as mass and inertia distributions, dimensions, etc.) presets the limits of its abilities or, in other words, its capabilities to perform certain tasks. If a robot is not well designed, no matter how advanced its controller is, it could end up with poor performance (Leavitt et al. 2004). On the other hand, if the design is "perfect", a larger range of feasible options is available in the control space, which makes it easier for the controller to achieve a desired task with higher performance. Also, in the case of redundant robots, a certain task is achievable via various configurations in which the physical abilities of the robot are different (Ajoudani et al. 2017). Therefore, in order to improve the robot's performance in different tasks and exploit its maximum abilities, it is desirable to be able to compare different configurations of a robot and possibly to find the optimal one (e.g. in terms of torque/energy efficiency). This is completely intuitive since humans always try to exploit the redundancy in their limbs and also the environmental contacts to improve their performance while minimizing their efforts in executing various tasks. For example, the usual configurations of human arms while using a screwdriver to tighten a screw are different from those while holding a mug. As already mentioned, finding (i) proper values for the design parameters, and (ii) the best configuration for a robot in performing a certain task are the two important elements in making high performance robots and/or improving the performance of existing robots. Thus, it is beneficial to develop a unified and general metric which enables us to measure the physical abilities of various robots in different configurations and different contact conditions. For this application, there exists a very famous metric in the robotics community which is called manipulability.
The concept of manipulability for robots was first introduced by Yoshikawa (1985a) in the 1980s. He defined the manipulability ellipsoid as the result of mapping the Euclidean norm of joint velocities (i.e. \({\dot{\mathbf {q}}}^T {\dot{\mathbf {q}}}\)) to the end-effector velocity space. By using the task space Jacobian (i.e. \(\mathbf {J}\)), he also proposed a manipulability metric for robots as \(w = \sqrt{\mathrm {det}(\mathbf {J}\mathbf {J}^T)}\), which represents the volume of the corresponding manipulability ellipsoid. The main issue with this measure is that multiplying \(\mathbf {J}\), which is a velocity mapping function, and \(\mathbf {J}^T\), which is a force mapping function, is physically meaningless. In other words, in a general case, a robot may have different joint types (e.g. revolute and prismatic) and therefore different velocity and force units in the joints, which makes the Jacobian have columns with different units. This issue was first identified by Doty et al. (1995). They proposed using a weighting matrix in order to unify the units. However, even after that, many researchers used (Chiu 1987; Gravagne and Walker 2001; Guilamo et al. 2006; Jacquier-Bret et al. 2012; Lee 1989, 1997; Leven and Hutchinson 2003; Melchiorri 1993; Vahrenkamp et al. 2012; Valsamos and Aspragathos 2009) or suggested (Chiacchio et al. 1991; Koeppe and Yoshikawa 1997) the same problematic metric for the manipulability of robots. Yoshikawa (1985b) also introduced the dynamic manipulability metric and the dynamic manipulability ellipsoid as extensions to his previous works on robot manipulability. He defined the dynamic manipulability metric as \(w_d = \sqrt{\mathrm {det}[\mathbf {J}(\mathbf {M}^T \mathbf {M})^{-1} \mathbf {J}^T]}\), where \(\mathbf {M}\) is the joint-space inertia matrix, and the dynamic manipulability ellipsoid as the result of mapping the unit norm of joint torques to the operational acceleration space. Here, \((\mathbf {M}^T \mathbf {M})^{-1}\) can be regarded as a weighting matrix, which obviously solves the main issue with the first manipulability metric. However, the physical interpretation of this metric still remains unclear. In other words, it is not quite obvious what the relationship is between \(w_d\) and the feasible or achievable operational space accelerations due to actual torque limits in the joints. Although Yoshikawa (1985b) and, later on, some other researchers (Chiacchio 2000; Kurazume and Hasegawa 2006; Rosenstein and Grupen 2002; Tanaka et al. 2006; Yamamoto and Yun 1999) tried to include the effects of maximum joint torques into the dynamic manipulability metric by normalizing the joint torques, their proposed normalizations are not done properly and therefore the results do not represent the physical abilities of a robot in producing operational space accelerations. The issue with their suggested normalization will be discussed in more detail in Sect. 3. Over the last two or three decades, many studies have been done on robot manipulability. Also, many researchers have used manipulability metrics/ellipsoids in order to design more efficient robots or find better and more efficient configurations for robots to perform certain tasks (Ajoudani et al. 2015; Bagheri et al. 2015; Bowling and Khatib 2005; Guilamo et al. 2006; Kashiri and Tsagarakis 2015; Tanaka et al. 2006; Tonneau et al. 2014, 2016; Zhang et al. 2013). However, almost all of these studies have overlooked the effects of not using (or of using an inappropriate) weighting matrix.
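To make the two metrics concrete, the sketch below evaluates \(w\) and \(w_d\) for a hypothetical planar 2R arm; the link lengths, masses and the chosen configuration are illustrative assumptions. Both joints are revolute here, so the unit inconsistency discussed above does not show up in this particular example.

```python
# Yoshikawa's kinematic and dynamic manipulability for an assumed planar 2R arm.
import numpy as np

l1, l2 = 0.4, 0.3                                 # link lengths [m] (assumed)
m1, m2 = 2.0, 1.5                                 # link masses [kg] (assumed)
I1, I2 = m1 * l1**2 / 12, m2 * l2**2 / 12         # slender-rod inertias about the centers
lc1, lc2 = l1 / 2, l2 / 2                         # centers of mass at mid-link

def jacobian(q1, q2):
    """End-effector Jacobian of the planar 2R arm."""
    s1, s12 = np.sin(q1), np.sin(q1 + q2)
    c1, c12 = np.cos(q1), np.cos(q1 + q2)
    return np.array([[-l1 * s1 - l2 * s12, -l2 * s12],
                     [ l1 * c1 + l2 * c12,  l2 * c12]])

def mass_matrix(q2):
    """Joint-space inertia matrix of the planar 2R arm."""
    a1 = I1 + I2 + m1 * lc1**2 + m2 * (l1**2 + lc2**2)
    a2 = m2 * l1 * lc2
    a3 = I2 + m2 * lc2**2
    return np.array([[a1 + 2 * a2 * np.cos(q2), a3 + a2 * np.cos(q2)],
                     [a3 + a2 * np.cos(q2),     a3]])

q1, q2 = 0.3, 1.2                                 # an arbitrary configuration [rad]
J, M = jacobian(q1, q2), mass_matrix(q2)
w  = np.sqrt(np.linalg.det(J @ J.T))                           # kinematic manipulability
wd = np.sqrt(np.linalg.det(J @ np.linalg.inv(M.T @ M) @ J.T))  # dynamic manipulability
print(f"w = {w:.4f}, w_d = {wd:.4f}")
```

Repeating this at different configurations (q1, q2) illustrates the configuration dependence of both metrics.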
In this paper, we focus on the weighting matrix for dynamic manipulability calculations and study its importance and influence on the dynamic manipulability analysis. We also show that, by using this analysis, we can decompose the effects of gravity and the robot's velocity from the effects of the robot's configuration and inertial parameters on the acceleration of a point of interest (i.e. operational space acceleration). Therefore, the outcome of the dynamic manipulability analysis will be a configuration based (i.e. velocity independent) metric/ellipsoid which is dependent only on the physical properties of a robot and its configuration. Hence, we claim that, by selecting proper values for the weighting matrix, dynamic manipulability can provide a powerful tool to analyse and measure a robot's physical abilities to perform a task. This paper is an extended and generalized version of our previous study on dynamic manipulability of the center of mass (CoM) (Azad et al. 2017). The main contributions over our previous work are (i) generalizing the idea of the weighting matrix for dynamic manipulability to any point of interest (not only the CoM), (ii) investigating the relationship between the dynamic manipulability and Gauss' principle of least constraints by suggesting a proper weighting matrix, (iii) describing the relationship between the dynamic manipulability metrics and operational space control, and (iv) discussing the applications of the dynamic manipulability metrics based on the suggested choices of weighting matrices. We first derive dynamic manipulability equations for the operational space of a robot. To this aim, we use general motion equations in which the robot is assumed to have a floating base with multiple contacts with the environment. Thus, the effects of under-actuation due to the floating base and kinematic constraints due to the contacts will be included in the calculations. As a result of our dynamic manipulability analysis, we obtain an ellipsoid which graphically shows the operational space accelerations due to the weighted unit norm of torques at the actuated joints. This is applicable to all types of robot manipulators as well as legged (floating base) robots with different contact conditions. The setting of the weights is up to the user and should be chosen based on the application. Two physically meaningful choices for the weights are introduced in this paper and their physical interpretations are discussed. We also discuss different manipulability metrics which can be computed using the equation of the manipulability ellipsoid. We investigate the application of those metrics in comparing various robot configurations and finding an optimal one in terms of the physical abilities of the robot to achieve a desired task.
Dynamic manipulability

Considering a floating base robot with multiple contacts with the environment, the inverse dynamics equation will be
$$\begin{aligned} \mathbf {M}(\mathbf {q}) {\ddot{\mathbf {q}}} + \mathbf {h}(\mathbf {q}, {\dot{\mathbf {q}}}) = \mathbf {B}\varvec{\tau }- \mathbf {J}_c^T \mathbf {f}_c, \end{aligned}$$ (1)
where \(\mathbf {M}\) is the \(n \times n\) joint-space inertia matrix, \(\mathbf {h}\) is the n-dimensional vector of centrifugal, Coriolis and gravity forces, \(\mathbf {B}\) is the \(n \times k\) selection matrix of the actuated joints, \(\varvec{\tau }\) is the k-dimensional vector of joint torques, \(\mathbf {J}_c\) is the \(l \times n\) Jacobian matrix of the constraints and \(\mathbf {f}_c\) is the l-dimensional vector of constraint forces (and/or moments). Here, we assume that the kinematic constraints are bilateral. This is a reasonable assumption if there is no slipping or loss of contact. In this case, we can write
$$\begin{aligned} \mathbf {J}_c {\dot{\mathbf {q}}} = 0 \implies \mathbf {J}_c {\ddot{\mathbf {q}}} = - \dot{\mathbf {J}}_c {\dot{\mathbf {q}}} \, . \end{aligned}$$ (2)
By multiplying both sides of (1) by \(\mathbf {J}_c \mathbf {M}^{-1}\), replacing \(\mathbf {J}_c {\ddot{\mathbf {q}}}\) from (2) and rearranging the outcome equation, we will have
$$\begin{aligned} \mathbf {f}_c = \mathbf {J}_f \varvec{\tau }+ \mathbf {f}_{vg}, \end{aligned}$$ (3)
where
$$\begin{aligned} \mathbf {J}_f = \mathbf {J}_{c_M}^{{\#}^T} \mathbf {B}\, \end{aligned}$$ (4)
is the \(l \times k\) mapping matrix from joint torques to contact forces,
$$\begin{aligned} \mathbf {f}_{vg} = -\mathbf {J}_{c_M}^{{\#}^T} \mathbf {h}+ (\mathbf {J}_c \mathbf {M}^{-1} \mathbf {J}_c^T)^{-1} \dot{\mathbf {J}}_c {\dot{\mathbf {q}}}, \end{aligned}$$ (5)
is the part of the contact forces which is due to gravity and the robot's velocity, and
$$\begin{aligned} \mathbf {J}_{c_M}^{\#} = \mathbf {M}^{-1} \mathbf {J}_c^T (\mathbf {J}_c \mathbf {M}^{-1} \mathbf {J}_c^T)^{-1}, \end{aligned}$$ (6)
is the inertia-weighted pseudo-inverse of \(\mathbf {J}_c\). Plugging \(\mathbf {f}_c\) from (3) back into (1) yields the forward dynamics equation as
$$\begin{aligned} {\ddot{\mathbf {q}}} = \mathbf {J}_q \varvec{\tau }+ {\ddot{\mathbf {q}}}_{vg}, \end{aligned}$$ (7)
where
$$\begin{aligned} {\ddot{\mathbf {q}}}_{vg} = -\mathbf {M}^{-1} (\mathbf {h}+ \mathbf {J}_c^T \mathbf {f}_{vg}), \end{aligned}$$ (8)
is the velocity and gravity dependent part of the joint accelerations, and
$$\begin{aligned} \mathbf {J}_q = \mathbf {M}^{-1} \mathbf {B}- \mathbf {M}^{-1} \mathbf {J}_c^T \mathbf {J}_f, \end{aligned}$$ (9)
is the mapping matrix from joint torques to joint accelerations. Observe that \(\mathbf {J}_q\) can be simplified as
$$\begin{aligned} \mathbf {J}_q = \mathbf {M}^{-1} (\mathbf {I}_{n \times n} - \mathbf {J}_c^T \mathbf {J}_{c_M}^{\#^T}) \mathbf {B}= \mathbf {M}^{-1} \mathbf {N}_{c_M}^T \mathbf {B}, \end{aligned}$$ (10)
where \(\mathbf {N}_{c_M}\) is the null-space projection matrix of \(\mathbf {J}_c\).
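A compact numerical sketch of the mappings in Eqs. (3)–(10) is given below, assuming the caller supplies the model terms \(\mathbf{M}\), \(\mathbf{h}\), \(\mathbf{J}_c\), \(\dot{\mathbf{J}}_c\dot{\mathbf{q}}\) and \(\mathbf{B}\) for a concrete robot; it is an illustration of the algebra, not code from the paper.

```python
# Torque-to-contact-force and torque-to-joint-acceleration mappings of Eqs. (3)-(10).
# M, h, Jc, dJc_dq (= \dot{J}_c \dot{q}) and B are placeholder arrays from a robot model.
import numpy as np

def constrained_mappings(M, h, Jc, dJc_dq, B):
    Minv = np.linalg.inv(M)
    Lam_c = np.linalg.inv(Jc @ Minv @ Jc.T)        # (J_c M^{-1} J_c^T)^{-1}
    Jc_pinv_M = Minv @ Jc.T @ Lam_c                # inertia-weighted pseudo-inverse, Eq. (6)
    Jf = Jc_pinv_M.T @ B                           # torque -> contact force, Eq. (4)
    f_vg = -Jc_pinv_M.T @ h + Lam_c @ dJc_dq       # velocity/gravity contact forces, Eq. (5)
    Nc_M = np.eye(M.shape[0]) - Jc_pinv_M @ Jc     # null-space projector of J_c
    Jq = Minv @ Nc_M.T @ B                         # torque -> joint acceleration, Eq. (10)
    qdd_vg = -Minv @ (h + Jc.T @ f_vg)             # velocity/gravity joint accel., Eq. (8)
    return Jf, f_vg, Jq, qdd_vg
```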
Similarly, we can write the operational space acceleration in the form of
$$\begin{aligned} \ddot{\mathbf {p}} = \mathbf {J}_p \varvec{\tau }+ \ddot{\mathbf {p}}_{vg}, \end{aligned}$$ (11)
where
$$\begin{aligned} \mathbf {J}_p = \mathbf {J}\mathbf {J}_q = \mathbf {J}\mathbf {M}^{-1} \mathbf {N}_{c_M}^T \mathbf {B}, \end{aligned}$$ (12)
is the mapping from joint torques to operational space acceleration,
$$\begin{aligned} \ddot{\mathbf {p}}_{vg} = \mathbf {J}{\ddot{\mathbf {q}}}_{vg} + \dot{\mathbf {J}} {\dot{\mathbf {q}}}, \end{aligned}$$ (13)
is the velocity and gravity dependent part of \(\ddot{\mathbf {p}}\), and \(\mathbf {J}\) is the Jacobian of point \(\mathbf {p}\) in the operational space of the robot, which implies \(\dot{\mathbf {p}} = \mathbf {J}{\dot{\mathbf {q}}}\).
Available torques at the joints are always limited due to saturation limits, which directly affects the accessible joint space and operational space accelerations. To investigate these effects, first we define limits on joint torques as
$$\begin{aligned} \varvec{\tau }^T \mathbf {W}_\tau \varvec{\tau }\le 1, \end{aligned}$$ (14)
which is a unit weighted norm of the actuated joint torques with \(\mathbf {W}_\tau \) as a \(k \times k\) weighting matrix. To find out the effects on \(\ddot{\mathbf {p}}\), we invert (11) as
$$\begin{aligned} \varvec{\tau }= \mathbf {J}_p^\# (\ddot{\mathbf {p}} - \ddot{\mathbf {p}}_{vg}) + \mathbf {N}_p \varvec{\tau }_0, \end{aligned}$$ (15)
where \(\varvec{\tau }_0\) is a vector of arbitrary joint torques, \(\mathbf {N}_p = \mathbf {I}- \mathbf {J}_p^\# \mathbf {J}_p\) is the projection matrix to the null-space of \(\mathbf {J}_p\), and
$$\begin{aligned} \mathbf {J}_p^\# = \mathbf {W}_\tau ^{-1} \mathbf {J}_p^T (\mathbf {J}_p \mathbf {W}_\tau ^{-1} \mathbf {J}_p^T)^{-1}, \end{aligned}$$ (16)
is a generalized inverse of \(\mathbf {J}_p\). By replacing \(\varvec{\tau }\) from (15) into (14), we will have
$$\begin{aligned} 0 \le (\ddot{\mathbf {p}} - \ddot{\mathbf {p}}_{vg})^T (\mathbf {J}_p \mathbf {W}_\tau ^{-1} \mathbf {J}_p^T)^{-1} (\ddot{\mathbf {p}} - \ddot{\mathbf {p}}_{vg}) \le 1 \, . \end{aligned}$$ (17)
The details of the derivations can be found in "Appendix I". The inequality in (17) defines an ellipsoid in the operational acceleration space which is called the dynamic manipulability ellipsoid. The center of this ellipsoid is at \(\ddot{\mathbf {p}}_{vg}\) and its size and shape are determined by the eigenvectors and eigenvalues of the matrix \(\mathbf {J}_p \mathbf {W}_\tau ^{-1} \mathbf {J}_p^T\). As can be seen, this matrix is a function of the weighting matrix \(\mathbf {W}_\tau \) and also of \(\mathbf {J}_p\), which is dependent on the robot's configuration and inertial parameters. Due to the high influence of the weighting matrix on the dynamic manipulability ellipsoid, it is quite important to define \(\mathbf {W}_\tau \) properly in order to obtain a correct and physically meaningful mapping from the bounded joint torques to the operational space acceleration. This can be helpful in order to study the effects of limited joint torques on the operational space accelerations. Note that, if the weighting matrix is not defined properly, the outcome ellipsoid will be confusing and ambiguous rather than beneficial and useful.
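Given \(\mathbf{J}_p\) from Eq. (12), a user-chosen \(\mathbf{W}_\tau\) and \(\ddot{\mathbf{p}}_{vg}\) from Eq. (13), the ellipsoid of Eq. (17) can be summarized numerically as sketched below (an assumed helper, not the authors' code): its center is \(\ddot{\mathbf{p}}_{vg}\), and its principal semi-axes are the square roots of the eigenvalues of \(\mathbf{J}_p \mathbf{W}_\tau^{-1} \mathbf{J}_p^T\) along the corresponding eigenvectors.

```python
# Center, semi-axis lengths and axis directions of the ellipsoid in Eq. (17).
import numpy as np

def manipulability_ellipsoid(Jp, W_tau, pdd_vg):
    E = Jp @ np.linalg.inv(W_tau) @ Jp.T
    eigval, eigvec = np.linalg.eigh(E)                 # E is symmetric positive (semi-)definite
    semi_axes = np.sqrt(np.clip(eigval, 0.0, None))    # lengths of the principal semi-axes
    return pdd_vg, semi_axes, eigvec                   # center, axis lengths, axis directions
```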
The first choice, called bounded joint torques, incorporates the saturation limits at the joints; the second, called bounded joint accelerations, assumes limits on the joint accelerations. The latter is also related to Gauss' principle of least constraints, which is discussed further in this section.
First choice: bounded joint torques
The dynamic manipulability ellipsoid is defined to map the available joint torques to the operational acceleration space. In order to include all available joint torques in the initial bounding inequality in (14), we introduce the weighting matrix $$\begin{aligned} \mathbf {W}_\tau = \frac{1}{k} \, \mathrm {diag}\left( \left[ \frac{1}{\tau _{1_\mathrm {max}}^2}, \frac{1}{\tau _{2_\mathrm {max}}^2}, \ldots , \frac{1}{\tau _{k_\mathrm {max}}^2}\right] \right) , \end{aligned}$$ where \(\tau _{i_\mathrm {max}}\) is the saturation limit at the \(i^\mathrm {th}\) joint and the function \(\mathrm {diag}(\mathbf {v})\) builds a diagonal matrix out of the vector \(\mathbf {v}\). Note that, if we substitute \(\mathbf {W}_\tau \) from (18) into (14), we will have $$\begin{aligned} \frac{\tau _1^2}{\tau _{1_\mathrm {max}}^2} + \frac{\tau _2^2}{\tau _{2_\mathrm {max}}^2} + \cdots + \frac{\tau _k^2}{\tau _{k_\mathrm {max}}^2} \le k, \end{aligned}$$ which holds whenever \(|\tau _i| \le \tau _{i_\mathrm {max}}\) for each i, and therefore accommodates all possible combinations of admissible joint torques. This is different from the torque normalization mentioned in the literature (Ajoudani et al. 2017; Chiacchio 2000; Gu et al. 2015; Rosenstein and Grupen 2002). To the best of the authors' knowledge, none of the previous studies considered the number of actuators (i.e. k) in the weighting matrix, which makes the resulting ellipsoid an inaccurate estimate of the feasible area. Figure 1 shows dynamic manipulability ellipses for a planar robot in six different configurations. The robot consists of five links connected by revolute joints. The first and last links are assumed to be passively in contact with the ground (to mimic a planar quadruped robot). The length and mass of the middle link are assumed to be twice the length and mass of the other links. Schematic diagrams of the robot configurations and the angles between the links are shown in the bottom left corner of each plot. Note that the middle link is horizontal. The ellipses are calculated for the center point of the middle link of the robot at each configuration. These points are shown by \(\otimes \) on the robots' schematic diagrams. The weighting matrix in (18) is used for the calculations, where the number of actuators is 4 and the maximum torque at the actuators connected to the middle link is assumed to be twice the maximum torque at the other two actuators. The velocity and gravity are set to zero since their only effect would be to change the center point of the ellipses.
Fig. 1 Dynamic manipulability ellipses are proper approximations of the polygons. The ellipses are calculated for the center point of the middle link (shown by \(\otimes \)) of a planar robot in six different configurations. The weighting matrix in (18) is used for the calculation of the ellipses. The polygons represent the feasible acceleration areas due to the joint torque limits.
The shaded polygons in Fig. 1 represent the exact areas in the acceleration space of the point of interest (i.e. the center point of the middle link) which are accessible under the limited joint torques in the six configurations; a brute-force construction of such areas is sketched below.
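A minimal sketch of this construction, under the assumption that \(\mathbf {J}_p\) and \(\ddot{\mathbf {p}}_{vg}\) are available from the model (all names are illustrative): the torque box defined by the saturation limits is sampled on its boundary and mapped through (11), and the bounded-joint-torques weighting (18) is built alongside it.

```python
import numpy as np
from itertools import product

def bounded_torque_weighting(tau_max):
    """Weighting matrix of Eq. (18) for saturation limits tau_max (length k)."""
    tau_max = np.asarray(tau_max, dtype=float)
    return np.diag(1.0 / tau_max**2) / tau_max.size

def feasible_acceleration_samples(Jp, pdd_vg, tau_max, n_per_edge=10):
    """Map points on the boundary of the torque box |tau_i| <= tau_max_i
    through Eq. (11) to sample the exact feasible acceleration region."""
    k = len(tau_max)
    samples = []
    for i in range(k):                          # fix joint i at +/- its limit
        for sign in (-1.0, 1.0):
            grids = [np.linspace(-t, t, n_per_edge) for t in tau_max]
            grids[i] = np.array([sign * tau_max[i]])
            for tau in product(*grids):
                samples.append(Jp @ np.array(tau) + pdd_vg)
    return np.array(samples)

# Example with placeholder data: 2D operational space, 4 actuators.
rng = np.random.default_rng(2)
Jp = rng.normal(size=(2, 4))
pdd_vg = np.zeros(2)
tau_max = np.array([1.0, 2.0, 2.0, 1.0])        # middle-link actuators twice as strong

W_tau = bounded_torque_weighting(tau_max)
pts = feasible_acceleration_samples(Jp, pdd_vg, tau_max)
A = Jp @ np.linalg.inv(W_tau) @ Jp.T
# every sampled point should satisfy the ellipsoid inequality (17)
inside = [(p - pdd_vg) @ np.linalg.solve(A, p - pdd_vg) <= 1 + 1e-9 for p in pts]
print(all(inside))
```

With this weighting the sampled polygon always lies inside the ellipse, which is exactly the containment visible in Fig. 1.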
These areas are computed numerically using (11). As can be seen in the plots, the polygons are always completely enclosed in the ellipses, which implies that the dynamic manipulability ellipses, with the suggested weighting matrix in (18), are reasonable approximations of the exact feasible areas. These ellipses also show graphically which accelerations of the point of interest are feasible in the operational space, given the joint torque limits, and in which directions the point of interest is easier to accelerate. Note that the choice of this point depends on the desired task. For example, for a balancing task, the CoM can be considered as the point of interest (Azad et al. 2017), whereas for a manipulation task it makes more sense to choose the end-effector as the point of interest. It is worth mentioning that the main purpose of the plots in Fig. 1 is to show the accuracy of the approximation of the polygons by the ellipses, although one can also compare the robot configurations in terms of feasible operational space accelerations for the same amount of available torque at the joints. As can be seen in this figure, the ellipses (and also the polygons) in the left column are larger than their corresponding ones in the right column, which implies that changing the angle from \(90^\circ \) to \(120^\circ \) extends the range of available accelerations at the point of interest.
Second choice: bounded joint accelerations
To propose our second suggestion for the weighting matrix, we first assume limits on the joint accelerations in the form of a unit weighted norm centered at \({\ddot{\mathbf {q}}}_{vg}\). This limit can be written as $$\begin{aligned} ({\ddot{\mathbf {q}}} - {\ddot{\mathbf {q}}}_{vg})^T \mathbf {W}_q ({\ddot{\mathbf {q}}} - {\ddot{\mathbf {q}}}_{vg}) \le 1, \end{aligned}$$ where \(\mathbf {W}_q\) is a positive definite weighting matrix in the joint acceleration space. This matrix can be used to unify the units and/or prioritize the importance of the joint accelerations. By substituting \(({\ddot{\mathbf {q}}} - {\ddot{\mathbf {q}}}_{vg})\) from (7) into (20), we will have $$\begin{aligned} (\mathbf {J}_q \varvec{\tau })^T \mathbf {W}_q (\mathbf {J}_q \varvec{\tau }) = \varvec{\tau }^T ( \mathbf {J}_q^T \mathbf {W}_q \mathbf {J}_q) \varvec{\tau }\le 1, \end{aligned}$$ which implies that choosing the weighting matrix as $$\begin{aligned} \mathbf {W}_\tau = \mathbf {J}_q^T \mathbf {W}_q \mathbf {J}_q, \end{aligned}$$ converts the inequality in (21) into the one in (14). Thus, the ellipsoid in (17) shows the boundaries on the operational space accelerations due to the limited joint accelerations. This is true only if \(\mathbf {W}_\tau \) in (22) is positive definite, in other words, if \(\mathbf {J}_q\) has full column rank. Observe that, in general, \(\mathbf {J}_q\) can be rank deficient due to the kinematic constraints. This happens when the contact forces cancel out the effects of the joint torques and result in zero motion at the joints (i.e. \({\ddot{\mathbf {q}}} = 0\) while \(\varvec{\tau }\ne 0\)). Mathematically, it means that a linear combination of the columns of \(\mathbf {J}_q\) is zero, which implies that \(\mathbf {J}_q\) is rank deficient. This violates the positive definiteness assumption on \(\mathbf {W}_\tau \) and invalidates the result in (17).
In this case, we define a new positive definite weighting matrix $$\begin{aligned} \mathbf {W}_{r_q} = \mathbf {J}_{q_c}^T \mathbf {W}_q \mathbf {J}_{q_c}, \end{aligned}$$ where \(\mathbf {J}_{q_c}\) is a full column rank matrix obtained from the singular value decomposition of \(\mathbf {J}_q\), as explained in "Appendix II". As a result of this decomposition we have $$\begin{aligned} \mathbf {J}_q = \mathbf {J}_{q_c} \mathbf {J}_{q_r}, \end{aligned}$$ where \(\mathbf {J}_{q_r}\) is a full row rank matrix. Plugging (24) back into (21) yields $$\begin{aligned} \varvec{\tau }^T (\mathbf {J}_{q_r}^T \mathbf {J}_{q_c}^T \mathbf {W}_q \mathbf {J}_{q_c} \mathbf {J}_{q_r}) \varvec{\tau }= \varvec{\tau }^T_{r_q} \mathbf {W}_{r_q} \varvec{\tau }_{r_q} \le 1, \end{aligned}$$ where \(\varvec{\tau }_{r_q} = \mathbf {J}_{q_r} \varvec{\tau }\) is regarded as a reduced vector of joint torques. The relationship between this vector and the operational space acceleration follows from (11) and (12) as $$\begin{aligned} \ddot{\mathbf {p}} = \mathbf {J}\mathbf {J}_{q_c} \varvec{\tau }_{r_q} + \ddot{\mathbf {p}}_{vg} \, . \end{aligned}$$ Therefore, the resulting ellipsoid in (17) becomes $$\begin{aligned} 0 \le (\ddot{\mathbf {p}} - \ddot{\mathbf {p}}_{vg})^T (\mathbf {J}\mathbf {J}_{q_c} \mathbf {W}_{r_q}^{-1} \mathbf {J}_{q_c}^T \mathbf {J}^T)^{-1} (\ddot{\mathbf {p}} - \ddot{\mathbf {p}}_{vg}) \le 1 \, . \end{aligned}$$ This ellipsoid helps in studying the effects of bounded joint accelerations on the operational space accelerations by assuming virtual limits on the joint accelerations.
Fig. 2 The intersection areas between the colored ellipses and the black ones are proper approximations of the corresponding colored areas. The colored ellipses are dynamic manipulability ellipses with bounded joint accelerations and \(\mathbf {W}_q = \mathbf {M}\). The blue, yellow and red ellipses correspond to different norms (1, 2 and 3, respectively) of the inequality in (30). The corresponding colored polygons show the feasible task space accelerations due to the torque limits and subject to (30).
Fig. 3 The intersection areas between the colored ellipses and the black ones in Fig. 2. The corresponding colored polygons show the feasible task space accelerations due to the torque limits and subject to (30).
Relation to the Gauss' principle of least constraints
Gauss' principle of least constraints states that a constrained system always minimizes the inertia-weighted norm of the difference between its acceleration and the acceleration it would have had if there were no constraints (Fan et al. 2005; Lötstedt 1982). In general, a robot's motion tasks can be regarded as virtual kinematic constraints which are enforced by control torques. Thus, to calculate the unconstrained robot's acceleration \({\ddot{\mathbf {q}}}_u\), both \(\mathbf {f}_c\) and \(\varvec{\tau }\) in (1) should be set to zero, which gives $$\begin{aligned} \mathbf {M}{\ddot{\mathbf {q}}}_u + \mathbf {h}= 0 \, \implies \, {\ddot{\mathbf {q}}}_u = - \mathbf {M}^{-1} \mathbf {h}\, . \end{aligned}$$ Therefore, the difference between the robot's acceleration in (7) and \({\ddot{\mathbf {q}}}_u\) is $$\begin{aligned} {\ddot{\mathbf {q}}} - {\ddot{\mathbf {q}}}_u = \mathbf {J}_q \varvec{\tau }+ {\ddot{\mathbf {q}}}_{vg} + \mathbf {M}^{-1} \mathbf {h}= \mathbf {J}_q \varvec{\tau }- \mathbf {M}^{-1} \mathbf {J}_c^T \mathbf {f}_{vg} \, . \end{aligned}$$
It is proved in "Appendix III" that the inertia-weighted norm of this difference is always greater than or equal to the left-hand side of (21) if \(\mathbf {W}_q\) is set to \(\mathbf {M}\). So, one can conclude that $$\begin{aligned} ({\ddot{\mathbf {q}}} - {\ddot{\mathbf {q}}}_u)^T \mathbf {M}({\ddot{\mathbf {q}}} - {\ddot{\mathbf {q}}}_u) \le 1 \, \implies \, \varvec{\tau }^T (\mathbf {J}_q^T \mathbf {M}\mathbf {J}_q) \varvec{\tau }\le 1 \, . \end{aligned}$$ This implies that, by setting \(\mathbf {W}_q = \mathbf {M}\), the ellipsoid in (27) represents the mapping into the task acceleration space of the function that is minimized in constrained systems according to Gauss' principle. Note that, in the special case where the robot is fully actuated and there are no constraint forces, we have \(\mathbf {J}_{q_c} = \mathbf {J}_q = \mathbf {M}^{-1}\). Therefore, in this case, setting \(\mathbf {W}_q = \mathbf {M}\) is equivalent to setting \(\mathbf {W}_\tau = \mathbf {M}^{-1}\) according to (22). The dynamic manipulability ellipsoid for this special case (with the above setting for the weighting matrix) is the same as the generalized inertia ellipsoid introduced in Asada (1983). Figure 2 repeats the graphs in Fig. 1 and adds new colored ellipses and areas. The blue, yellow and red ellipses show dynamic manipulability ellipses calculated using (27), where the joint weighting matrix \(\mathbf {W}_q\) is set to \(\mathbf {M}\), \(\frac{1}{4}\mathbf {M}\) and \(\frac{1}{9}\mathbf {M}\), respectively. Note that the factor of \(\mathbf {M}\) in \(\mathbf {W}_q\) determines the norm of the inequality in (30); this norm is 1, 2 and 3 for the blue, yellow and red ellipses, respectively. The colored polygons in the plots represent the corresponding exact feasible areas, which result from mapping the joint accelerations in (30) to the task acceleration space given the torque saturation limits. These areas are obtained by evaluating (11) numerically subject to the inequality on the left-hand side of (30) and to the torque limits. The intersection areas between the colored ellipses and the black ones are shown in Fig. 3. The colored polygons in this figure are the same as those in Fig. 2. According to Fig. 3, the intersection areas are reasonable approximations of the exact areas shown by the corresponding colored polygons. However, in the top two plots the approximations are not as good as in the others. The reason is that in these two plots there are relatively large gaps between the feasible areas due to the torque limits only (i.e. the gray polygons) and the dynamic manipulability ellipse with bounded joint torques (i.e. the black ellipse), which directly affects the estimation of the colored areas. This is inevitable in some configurations for robots with under-actuation and/or kinematic constraints, due to the rank deficiency of \(\mathbf {J}_q\). As can be seen in Fig. 2, the colored ellipses for each configuration have the same shape but different sizes. The shapes are the same since they map the same inequality (30), and the sizes differ since the values of the norm in this inequality differ. The axis of the larger radius of the colored ellipses shows the direction in the task acceleration space in which a lower inertia-weighted norm of \(({\ddot{\mathbf {q}}}-{\ddot{\mathbf {q}}}_u)\) is achievable.
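For the full-column-rank case, the bounded-joint-accelerations weighting (22) and the corresponding ellipsoid matrix from (27) can be sketched as follows (Python/NumPy; names are illustrative, and the rank-deficient case additionally requires the full-rank factorization of Appendix II, sketched later). Setting \(\mathbf {W}_q = \mathbf {M}\) reproduces the Gauss-principle ellipses discussed above; the scale factor on \(\mathbf {M}\) plays the role of the norm bound.

```python
import numpy as np

def bounded_acceleration_weighting(Jq, Wq):
    """Weighting matrix of Eq. (22), valid when Jq has full column rank."""
    Wtau = Jq.T @ Wq @ Jq
    np.linalg.cholesky(Wtau)   # raises LinAlgError if Jq is rank deficient (see Appendix II)
    return Wtau

def gauss_principle_ellipsoid_matrix(J, Jq, M, norm_bound=1.0):
    """Ellipsoid matrix of Eq. (27) in the full-rank case with Wq = M / norm_bound^2,
    i.e. the Gauss-principle setting scaled to allow an inertia-weighted norm of norm_bound."""
    Wq = M / norm_bound**2
    Wtau = bounded_acceleration_weighting(Jq, Wq)
    Jp = J @ Jq
    return Jp @ np.linalg.inv(Wtau) @ Jp.T

# Placeholder example: a fully actuated robot without contacts, where Jq = M^{-1}.
rng = np.random.default_rng(3)
L = rng.normal(size=(3, 3))
M = L @ L.T + 3 * np.eye(3)          # synthetic positive definite inertia matrix
J = rng.normal(size=(2, 3))          # task Jacobian of the point of interest
Jq = np.linalg.inv(M)

A1 = gauss_principle_ellipsoid_matrix(J, Jq, M, norm_bound=1.0)   # "blue" ellipse
A3 = gauss_principle_ellipsoid_matrix(J, Jq, M, norm_bound=3.0)   # "red" ellipse
print(np.allclose(A3, 9 * A1))       # same shape, scaled size -> True
```

Comparing the principal axes of such a matrix with those of the bounded-joint-torques matrix gives exactly the alignment comparison discussed around Figs. 2 and 3.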
It is therefore ideal to have the larger radii of both the black and colored ellipses in the same direction, so that the intersection area between them is large. In that case, a larger part of the feasible area (i.e. the gray area estimated by the black ellipse) is covered by the colored areas, implying that more points in the operational acceleration space are achievable with a lower inertia-weighted norm of \(({\ddot{\mathbf {q}}} - {\ddot{\mathbf {q}}}_u)\). In other words, although it is beneficial to have large ellipsoids of both types (i.e. bounded joint torques and bounded joint accelerations with \(\mathbf {W}_q = \mathbf {M}\)), it is also desirable to have both ellipsoids oriented in the same direction to maximize the intersection area between them.
Manipulability metrics
We define the manipulability matrix as the matrix that determines the size and shape of the manipulability ellipsoid. Thus, if we write both manipulability ellipsoid inequalities in (17) and (27) as $$\begin{aligned} 0 \le (\ddot{\mathbf {p}} - \ddot{\mathbf {p}}_{vg})^T \mathbf {A}^{-1} (\ddot{\mathbf {p}} - \ddot{\mathbf {p}}_{vg}) \le 1, \end{aligned}$$ then \(\mathbf {A}\) is the manipulability matrix, which is \(\mathbf {A}= \mathbf {J}_p \mathbf {W}_\tau ^{-1} \mathbf {J}_p^T\) for (17) and \(\mathbf {A}= \mathbf {J}\mathbf {J}_{q_c} \mathbf {W}_{r_q}^{-1} \mathbf {J}_{q_c}^T \mathbf {J}^T\) for (27). As mentioned earlier in Sect. 1, the square root of the determinant of the manipulability matrix (i.e. \(w = \sqrt{\mathrm {det}(\mathbf {A})}\)) is defined as a manipulability metric in most of the studies in the literature (Lee 1997; Vahrenkamp et al. 2012; Yoshikawa 1985b, 1991). This metric represents the volume of the manipulability ellipsoid and reflects the ability to accelerate the point of interest in all directions in general. Most of the time, however, we want to measure the ability to accelerate the robot in a certain direction. To this aim, some studies (Chiu 1987; Koeppe and Yoshikawa 1997; Lee and Lee 1988; Lee 1989) proposed the length of the manipulability ellipsoid in the desired direction as a suitable metric. This length is the distance between the center point and the intersection of the desired direction with the surface of the ellipsoid. As an example for a 2D case, this length is shown by d in Fig. 4, where the desired direction is denoted by \(\mathbf {u}\). To calculate d, since the intersection point is on the surface of the ellipsoid, we replace \((\ddot{\mathbf {p}} - \ddot{\mathbf {p}}_{vg})\) with \(d \frac{\mathbf {u}}{|\mathbf {u}|}\) in the equality form of (31). Therefore, $$\begin{aligned} \left( \frac{d \mathbf {u}}{|\mathbf {u}|}\right) ^T \mathbf {A}^{-1} \frac{d \mathbf {u}}{|\mathbf {u}|} = 1 = \frac{d^2}{|\mathbf {u}|^2} \mathbf {u}^T \mathbf {A}^{-1} \mathbf {u}, \end{aligned}$$ which implies that $$\begin{aligned} d = |\mathbf {u}| (\mathbf {u}^T \mathbf {A}^{-1} \mathbf {u})^{-\frac{1}{2}} \, . \end{aligned}$$
Fig. 4 An example of a manipulability ellipse and geometrical descriptions of the manipulability metrics.
Another useful measure is the orthogonal projection of the ellipsoid onto the desired direction, shown by s in Fig. 4 for a 2D example. This projection indicates the maximum acceleration of the point of interest in the direction \(\mathbf {u}\), though achieving that acceleration may cause accelerations in other directions as well. To calculate s, we use the method and equations described in Pope (2008).
To do so, we first rewrite the ellipsoid inequality in (31) to conform with the form used in Pope (2008). Since \(\mathbf {A}\) is a symmetric matrix, its eigendecomposition gives \(\mathbf {A}= \mathbf {Q}\varvec{\varLambda }\mathbf {Q}^T\), where \(\mathbf {Q}\) is an orthogonal matrix and \(\varvec{\varLambda }\) is a diagonal matrix of the eigenvalues of \(\mathbf {A}\). Note that \(\mathbf {A}^{-1} = \mathbf {Q}\varvec{\varLambda }^{-1} \mathbf {Q}^T = (\mathbf {Q}\varvec{\varLambda }^{-\frac{1}{2}})(\varvec{\varLambda }^{-\frac{1}{2}} \mathbf {Q}^T)\). So, we can rewrite (31) as $$\begin{aligned} |(\ddot{\mathbf {p}} - \ddot{\mathbf {p}}_{vg})^T \mathbf {Q}\varvec{\varLambda }^{-\frac{1}{2}}| = |\varvec{\varLambda }^{-\frac{1}{2}} \mathbf {Q}^T (\ddot{\mathbf {p}} - \ddot{\mathbf {p}}_{vg})| \le 1 \, . \end{aligned}$$ According to Pope (2008), for an ellipsoid of the form (34), s can be calculated via $$\begin{aligned} s = \frac{|\mathbf {u}^T \mathbf {Q}\varvec{\varLambda }^{\frac{1}{2}}|}{|\mathbf {u}|} = \frac{(\mathbf {u}^T \mathbf {Q}\varvec{\varLambda }\mathbf {Q}^T \mathbf {u})^{\frac{1}{2}}}{|\mathbf {u}|} = \frac{1}{|\mathbf {u}|} (\mathbf {u}^T \mathbf {A}\mathbf {u})^{\frac{1}{2}} \, . \end{aligned}$$ For the details of the calculations, readers are referred to Pope (2008).
Applications of manipulability metrics
In this section, we explain the application of the manipulability metrics through two examples. In these examples, we (i) compare different robot configurations (in Sect. 5.1), and (ii) find an optimal configuration (in Sect. 5.2) for a robot to accelerate its end-effector in desired directions. To this aim, the proper metric is the length of the manipulability ellipsoid, i.e. d in (33). The robot is assumed to be a three-degree-of-freedom RRR planar robot. Each link of this robot has unit mass and unit length with its CoM at the middle point.
Example I: Comparing robot configurations
In this example, we consider six different configurations of the planar robot and plot the bounded joint torques ellipses using (17) and the bounded joint accelerations ellipses using (27) for the end-effector of that robot. These ellipses are shown in Fig. 5 in black and gray, respectively. For the bounded joint torques ellipses we assume that the torque limits are the same for all joints (i.e. \(\tau _{max} = 0.5\)), and for the bounded joint accelerations ellipses we set \(\mathbf {W}_q = \mathbf {M}\) to conform to Gauss' principle of least constraints. We also calculate the lengths of the ellipses for three desired directions using (33). The desired directions are (i) the horizontal, (ii) \(45^\circ \) to the horizontal, and (iii) the vertical, which are shown by vectors in the plots in Fig. 5. The values of these lengths are reported in Tables 1 and 2 under the columns \(d_1\) for the bounded joint torques ellipses and \(d_2\) for the bounded joint accelerations ellipses.
Fig. 5 Comparing dynamic manipulability ellipses for the end-effector of a planar RRR robot in six different configurations. Black and gray ellipses are the bounded joint torques and bounded joint accelerations ellipses, respectively. Torque saturation limits at the joints are assumed to be the same, and \(\mathbf {W}_q = \mathbf {M}\) for the bounded joint accelerations ellipses to conform with the Gauss' principle. Three desired directions are shown by vectors in the plots.
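Before turning to the values in Tables 1 and 2, note that the three metrics defined above follow directly from \(\mathbf {A}\). A minimal sketch (Python/NumPy; the matrix below is a placeholder, not taken from the paper's robot):

```python
import numpy as np

def metric_w(A):
    """Volume-type metric w = sqrt(det(A))."""
    return np.sqrt(np.linalg.det(A))

def metric_d(A, u):
    """Length of the ellipsoid in direction u, Eq. (33)."""
    u = np.asarray(u, dtype=float)
    return np.linalg.norm(u) / np.sqrt(u @ np.linalg.solve(A, u))

def metric_s(A, u):
    """Orthogonal projection of the ellipsoid onto direction u, Eq. (35)."""
    u = np.asarray(u, dtype=float)
    return np.sqrt(u @ A @ u) / np.linalg.norm(u)

# Example with a placeholder manipulability matrix:
A = np.array([[4.0, 1.0],
              [1.0, 2.0]])
for u in ([1, 0], [1, 1], [0, 1]):        # horizontal, 45 degrees, vertical
    print(u, round(metric_d(A, u), 3), round(metric_s(A, u), 3))
# s >= d always holds, with equality when u lies along a principal axis of A.
```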
In Tables 1 and 2, \(||\varvec{\tau }||\) and \(||\varvec{\tau }||_{\mathbf {M}^{-1}} = (\varvec{\tau }^T \mathbf {M}^{-1} \varvec{\tau })^{\frac{1}{2}}\) are, respectively, the norms and the inverse inertia-weighted norms of the minimum joint torques required to accelerate the robot's end-effector by one unit in the desired directions at each configuration. The minimum joint torques are calculated using (15), assuming that \(\varvec{\tau }_0 = 0\) and \(\ddot{\mathbf {p}}_{vg}=0\) (i.e. velocity and gravity are set to zero). Note that, for these calculations, \(\mathbf {J}_p^\#\) needs to be computed via (16), which depends on the weighting matrix \(\mathbf {W}_\tau \). In order to compare the norms of the minimum joint torques with the relevant manipulability metrics (i.e. \(d_1\) and \(d_2\)), \(\mathbf {W}_\tau \) in (16) is assumed to be the identity for the torques in Table 1, and \(\mathbf {M}^{-1}\) for the torques in Table 2. Setting \(\mathbf {W}_\tau \) to the identity conforms to the setting in (18) when the saturation limits are the same, and setting \(\mathbf {W}_\tau = \mathbf {M}^{-1}\) agrees with (30) since \(\mathbf {J}_q = \mathbf {M}^{-1}\) for this robot. It is worth mentioning that these two settings for \(\mathbf {W}_\tau \) are the most common ones in operational space control frameworks (Peters and Schaal 2008).
Table 1 Norm of the minimum joint torques and (black) ellipse lengths for six different robot configurations and three desired directions according to Fig. 5
Table 2 Inverse inertia-weighted norm of the minimum joint torques and (gray) ellipse lengths for six different robot configurations and three desired directions according to Fig. 5
As can be seen in both Tables 1 and 2, wherever the norm or the weighted norm of the joint torques is higher, the corresponding manipulability metric is lower, and vice versa. In other words, the norms or weighted norms of the torques are inversely related to the corresponding manipulability metrics \(d_1\) or \(d_2\). This implies that maximizing the manipulability metrics is the dual problem of minimizing the (weighted) norm of the joint torques (a numerical check of this duality is sketched below). Therefore, one can optimize the relevant dynamic manipulability metric in order to maximize the robot's performance or efficiency at a certain task. This is described in the next example. Another advantage of dynamic manipulability analysis is that it provides a graphical representation of the mapping from the joint torques to the operational acceleration space, which can help in better understanding the problem, especially if it is planar. For example, comparing the plots in each row of Fig. 5, one can conclude that the left-hand ones correspond to better (more efficient) configurations for accelerating the robot's end-effector in the desired directions. This is because both the black and gray ellipses in the left column plots (odd numbers) are extended in the same direction as the desired ones, whereas in the right column plots (even numbers) at least one of the ellipses is not extended in the desired direction. This conclusion agrees with the values in the diagonal components of Tables 1 and 2, since the norm or weighted norm of the joint torques is lower in the odd-numbered plots compared to the corresponding even-numbered ones.
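The inverse relation observed in Tables 1 and 2 can be checked numerically: for a unit direction \(\mathbf {u}\) and the minimum-norm torque from (15) with \(\varvec{\tau }_0 = 0\) and \(\ddot{\mathbf {p}}_{vg} = 0\), the \(\mathbf {W}_\tau \)-weighted norm of that torque equals 1/d, which follows from the same algebra as in Appendix I. A small sketch with placeholder data (assuming \(\mathbf {J}_p\) and \(\mathbf {W}_\tau \) come from the user's model):

```python
import numpy as np

def weighted_pinv(Jp, W_tau):
    """Generalized inverse of Eq. (16)."""
    Winv = np.linalg.inv(W_tau)
    return Winv @ Jp.T @ np.linalg.inv(Jp @ Winv @ Jp.T)

def min_torque_for_unit_acceleration(Jp, W_tau, u):
    """Minimum torque from Eq. (15) with tau_0 = 0 and pdd_vg = 0."""
    u = np.asarray(u, dtype=float) / np.linalg.norm(u)
    return weighted_pinv(Jp, W_tau) @ u

rng = np.random.default_rng(4)
Jp = rng.normal(size=(2, 3))                 # placeholder torque -> acceleration map
W_tau = np.eye(3)                            # identity weighting, as in Table 1
u = np.array([1.0, 1.0])
u_hat = u / np.linalg.norm(u)

tau = min_torque_for_unit_acceleration(Jp, W_tau, u)
A = Jp @ np.linalg.inv(W_tau) @ Jp.T
d = 1.0 / np.sqrt(u_hat @ np.linalg.solve(A, u_hat))   # Eq. (33) for a unit direction

print(np.sqrt(tau @ W_tau @ tau), 1.0 / d)   # the two numbers coincide
```

With \(\mathbf {W}_\tau \) set to the identity this mirrors the Table 1 comparison; with \(\mathbf {W}_\tau = \mathbf {M}^{-1}\) and the gray-ellipse matrix it mirrors Table 2, under the settings described above.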
Example II: Optimizing the robot configuration
In the second example, we find optimal configurations for the robot in order to minimize the norm and the inverse inertia-weighted norm of the joint torques. The task is to accelerate the robot's end-effector in the direction of \(60^\circ \) to the horizontal while the position of the end-effector is at \(\mathbf {p}= (0.5,1.5)\). This is a typical redundancy resolution problem in operational space control. Figure 6 shows the bounded joint torques ellipses (black) and the bounded joint accelerations ellipses (gray, conforming to Gauss' principle) for the robot in two optimal configurations. These configurations, which are shown in the right column of Fig. 6, are the outcomes of an optimization algorithm. This algorithm maximizes the length of the black and gray ellipses in the desired direction for the bottom and top plots, respectively. The desired direction is shown by vectors in the plots. The optimization problem has the following form: $$\begin{aligned} \begin{aligned}&\underset{\mathbf {q}}{\text {maximize}}&d(\mathbf {q}) \\&\text {subject to}&\mathbf {q}_l \le \mathbf {q}\le \mathbf {q}_u \end{aligned} \end{aligned}$$ where \(\mathbf {q}_l\) and \(\mathbf {q}_u\) are the lower and upper limits of the joints. Note that d is calculated using (33) and depends on \(\mathbf {q}\) through the matrix \(\mathbf {A}\); a generic sketch of such an optimization is given at the end of this example.
Fig. 6 Two optimal configurations for a planar RRR robot (right column) and the corresponding dynamic manipulability ellipses (left column). The black and gray ellipses are the bounded joint torques and bounded joint accelerations ellipses, respectively. For the former, the torque saturation limits at the joints are assumed to be the same, and for the latter \(\mathbf {W}_q = \mathbf {M}\) to conform to the Gauss' principle.
According to Fig. 6, depending on the objective function, which is maximizing the length of either the black or the gray ellipse in the desired direction, the optimal configuration of the robot is different. The values of the optimal lengths of the black and gray ellipses are listed in Table 3 under the columns \(d_1\) and \(d_2\), respectively. The norm and the inverse inertia-weighted norm of the required joint torques in the optimal configurations are also reported in the table. As can be seen in this table, the norm of the joint torques is lower in the bottom plot compared to the top one, whereas the inverse inertia-weighted norm of the joint torques in the top plot is lower compared to the bottom one. This agrees with the values of \(d_1\) and \(d_2\), which are the corresponding metrics. Note that the inverse inertia-weighted norm of the joint torques represents the inertia-weighted norm of \(({\ddot{\mathbf {q}}} - {\ddot{\mathbf {q}}}_u)\) for this robot. This implies that in the top plot the inertia-weighted norm of the joint accelerations is lower although the norm of the joint torques is higher. Therefore, by using dynamic manipulability analysis, we can optimize a robot's configuration in terms of torque and/or acceleration efficiency. It is worth mentioning that, in this particular example, even the norm of the joint accelerations is lower in the top plot compared to the bottom one. The values of the joint accelerations required to accelerate the end-effector in the desired direction are \({\ddot{\mathbf {q}}}_{bottom} = (0.53, -1.14, 1.99)^T\) for the bottom plot and \({\ddot{\mathbf {q}}}_{top} = (0.12, 0.23, -1.23)^T\) for the top one. So, the norms of the joint accelerations are 2.36 and 1.26, respectively.
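A generic sketch of the optimization in (36), assuming the user supplies a function that returns the manipulability matrix A(q) for their robot model (the helper name ellipsoid_matrix and the toy model below are hypothetical, not from the paper); SciPy's bounded optimizer stands in for whatever solver is preferred.

```python
import numpy as np
from scipy.optimize import minimize

def metric_d(A, u):
    """Ellipsoid length in direction u, Eq. (33)."""
    u = np.asarray(u, dtype=float)
    return np.linalg.norm(u) / np.sqrt(u @ np.linalg.solve(A, u))

def optimize_configuration(ellipsoid_matrix, u, q0, q_lower, q_upper):
    """Solve Eq. (36): maximize d(q) subject to joint limits.

    ellipsoid_matrix : callable q -> A(q), supplied by the robot model (assumed)
    u                : desired acceleration direction
    q0               : initial configuration guess
    """
    objective = lambda q: -metric_d(ellipsoid_matrix(q), u)   # maximize d
    bounds = list(zip(q_lower, q_upper))
    res = minimize(objective, q0, bounds=bounds, method="L-BFGS-B")
    return res.x, -res.fun

# Illustrative toy model (not the RRR robot of the paper): A(q) built from a
# configuration-dependent 2x3 Jacobian-like matrix.
def toy_ellipsoid_matrix(q):
    Jp = np.array([[np.cos(q[0]), np.cos(q[0] + q[1]), np.cos(q.sum())],
                   [np.sin(q[0]), np.sin(q[0] + q[1]), np.sin(q.sum())]])
    return Jp @ Jp.T + 1e-9 * np.eye(2)   # small regularization keeps A invertible

u = np.array([np.cos(np.pi / 3), np.sin(np.pi / 3)])   # 60 degrees to the horizontal
q_opt, d_opt = optimize_configuration(toy_ellipsoid_matrix, u,
                                      q0=np.array([0.3, 0.3, 0.3]),
                                      q_lower=-np.pi * np.ones(3),
                                      q_upper=np.pi * np.ones(3))
print(q_opt, d_opt)
```

Swapping toy_ellipsoid_matrix for the black- or gray-ellipse matrix of a real model reproduces the two objective functions compared in Fig. 6.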
Table 3 Norm and weighted norm of the minimum joint torques and lengths of the ellipses for two optimal robot configurations in Fig. 6 We revisited the concept of dynamic manipulability analysis for robots and derived the corresponding equations for floating base robots with multiple contacts with the environment. The outcomes of this analysis are a manipulability ellipsoid which is dependent on a weighting matrix, and different manipulability metrics which are extracted from the ellipsoid. We described the importance of the weighting matrix which is included in the equations and claimed that, by using proper weighting matrix, dynamic manipulability can be a useful tool in order to study, analyse and measure physical abilities of robots in different tasks. We suggested two physically meaningful options for the weighting matrix and explained their applications in comparing different robot configurations and finding an optimal one using two illustrative examples. The dynamic manipulability analysis can be performed for any point of interest of a robot according to the desired task. Ajoudani, A., Tsagarakis, N., & Bicchi, A. (2015). On the role of robot configuration in Cartesian stiffness. In IEEE international conference on robotics and automation, Seattle, WA (pp. 1010–1016). Ajoudani, A., Tsagarakis, N., & Bicchi, A. (2017). Choosing poses for force and stiffness control. IEEE Transactions on Robotics, 33(6), 1483–1490. Asada, H. (1983). A geometrical representation of manipulator dynamics and its application to arm design. Journal of Dynamic Systems, Measurement and Control, 105(3), 131–142. Article MATH Google Scholar Azad, M., Babič, J., & Mistry, M. (2017). Dynamic manipulability of the center of mass: a tool to study, analyse and measure physical ability of robots. In IEEE international conference on robotics and automation, Singapore (pp. 3484–3490). Bagheri, M., Ajoudani, A., Lee, J., Caldwell, D., & Tsagarakis, N. (2015). Kinematic analysis and design considerations for optimal base frame arrangement of humanoid shoulders. In IEEE international conference on robotics and automation, Seattle, WA (pp. 2710–2715). Bowling, A., & Khatib, O. (2005). The dynamic capability equations: a new tool for analyzing robotic manipulator performance. IEEE Transactions on Robotics, 21(1), 115–123. Chiacchio, P. (2000). A new dynamic manipulability ellipsoid for redundant manipulators. Robotica, 18(4), 381–387. Chiacchio, P., Chiaverini, S., Sciavicco, L., & Siciliano, B. (1991). Global task space manipulability ellipsoids for multiple-arm systems. IEEE Transactions on Robotics and Automation, 7(5), 678–685. Chiu, S. (1987). Control of redundant manipulators for task compatibility. In IEEE international conference on robotics and automation, Raleigh, NC (pp. 1718–1724). Doty, K., Melchiorri, C., Schwartz, E., & Bonivento, C. (1995). Robot manipulability. IEEE Transactions on Robotics and Automation, 11(3), 462–468. Fan, Y., Kalaba, R., Natsuyama, H., & Udwadia, F. (2005). Reflections on the Gauss principle of least constraint. Journal of Optimization Theory and Applications, 127(3), 475–484. Article MathSciNet MATH Google Scholar Gravagne, I., & Walker, I. (2001). Manipulability and force ellipsoids for continuum robot manipulators. In IEEE/RSJ international conference on intelligent robots and systems, Maui, Hawaii (pp. 304–311). Gu, Y., Lee, C., & Yao, B. (2015). Feasible center of mass dynamic manipulability of humanoid robots. 
In IEEE international conference on robotics and automation, Seattle, Washington (pp. 5082–5087). Guilamo, L., Kuffner, J., Nishiwaki, K., & Kagami, S. (2006). Manipulability optimization for trajectory generation. In IEEE international conference on robotics and automation, Orlando, Florida (pp. 2017–2022). Jacquier-Bret, J., Gorce, P., & Rezzoug, N. (2012). The manipulability: A new index for quantifying movement capacities of upper extremity. Ergonomics, 55(1), 69–77. Kashiri, N., & Tsagarakis, N. (2015). Design concept of CENTAURO robot. H2020 Deliverable D2.1. Koeppe, R., & Yoshikawa, T. (1997). Dynamic manipulability analysis of compliant motion. In IEEE/RSJ international conference on intelligent robots and systems, Grenoble, France (pp. 1472–1478). Kurazume, R., & Hasegawa, T. (2006). A new index of serial-link manipulator performance combining dynamic manipulability and manipulating force ellipsoids. IEEE Transactions on Robotics, 22(5), 1022–1028. Leavitt, J., Bobrow, J., & Sideris, A. (2004). Robust balance control of a one-legged, pneumatically-actuated, acrobot-like hopping robot. In Proceedings of IEEE international conference on robotics and automation. New Orleans, LA (pp. 4240–4245). Lee, J. (1997). A study on the manipulability measures for robot manipulators. In IEEE/RSJ international conference on intelligent robots and systems, Grenoble, France (pp. 1458–1465). Lee, S. (1989). Dual redundant arm configuration optimization with task-oriented dual arm manipulability. IEEE Transactions on Robotics and Automation, 5(1), 78–97. Lee, S., & Lee, J. (1988). Task-oriented dual-arm manipulability and its application to configuration optimization. In Proceedings conference on decision and control, Austin, Texas (pp. 22553–2260). Leven, P., & Hutchinson, S. (2003). Using manipulability to bias sampling during the construction of probabilistic roadmaps. IEEE Transactions on Robotics and Automation, 19(6), 1020–1026. Lötstedt, P. (1982). Mechanical systems of rigid bodies subject to unilateral constraints. SIAM Journal of Applied Mathematics, 42(2), 281–296. Article MathSciNet Google Scholar Melchiorri, C. (1993). Comments on "global task space manipulability ellipsoids for multiple-arm systems" and further considerations. IEEE Transaction on Robotics and Automation, 9(2), 232–236. Peters, J., & Schaal, S. (2008). Learning to control in operational space. International Journal of Robotics Research, 27(2), 197–212. Pope, S. (2008). Algorithms for ellipsoids. Cornell University. Ithaca, New York. Report No. FDA-08-01, February Rosenstein, M., & Grupen, R. (2002). Velocity-dependent dynamic manipulability. In IEEE international conference on robotics and automation, Washington, DC (pp. 2424–2429). Tanaka, Y., Shiokawa, M., Yamashita, H., & Tsuji, T. (2006). Manipulability analysis of kicking motion in soccer based on human physical properties. In IEEE international conference on system, man and cybernetics, Taipei, Taiwan (pp. 68–73). Tonneau, S., Del Prete, A., Pettré, J., Park, C., Manocha, D., & Mansard, N. (2016). An efficient acyclic contact planner for multiped robots. International Journal of Robotics Research, 34(3), 586–601. Tonneau, S., Pettré, J., & Multon, F. (2014). Using task efficient contact configurations to animate creatures in arbitrary environments. Computer and Graphics, 45, 40–50. Vahrenkamp, N., Asfour, T., Metta, G., Sandini, G., & Dillmann, R. (2012). Manipulability analysis. In IEEE-RAS international conference on humanoid robots, Osaka, Japan (pp. 568–573). 
Valsamos, H., & Aspragathos, N. (2009). Determination of anatomy and configuration of a reconfigurable manipulator for the optimal manipulability. In ASME/IFToMM international conference on reconfigurable mechanisms and robots, London, UK (pp. 505–511). Yamamoto, Y., & Yun, X. (1999). Unified analysis on mobility and manipulability of mobile manipulators. In IEEE international conference on robotics and automation, Detroit, Michigan (pp. 1200–1206). Yoshikawa, T. (1985a). Manipulability of robotic mechanisms. International Journal of Robotics Research, 4(2), 3–9. Yoshikawa, T. (1985b). Dynamic manipulability of robot manipulators. Journal of Robotic Systems, 2(1), 113–124. Yoshikawa, T. (1991). Translational and rotational manipulability of robotic manipulators. In IEEE international conference on industrial electronics, control and instrumentation, Kobe, Japan (pp. 1170–1175). Zhang, T., Minami, M., Yasukura, O., & Song, W. (2013). Reconfiguration manipulability analyses for redundant robots. Journal of Mechanisms and Robotics, 5(4), 041001.
Author information: Morteza Azad, School of Computer Science, University of Birmingham, Edgbaston, UK. Jan Babič, Laboratory for Neuromechanics and Biorobotics, Department of Automation, Biocybernetics and Robotics, Jožef Stefan Institute, Jamova cesta 39, 1000 Ljubljana, Slovenia. Michael Mistry, School of Informatics, University of Edinburgh, Edinburgh, UK. Correspondence to Morteza Azad.
Appendix I
First, we define \(\ddot{\mathbf {p}}_\varDelta = \ddot{\mathbf {p}} - \ddot{\mathbf {p}}_{vg}\). Therefore, by replacing (15) into (14), we will have $$\begin{aligned} 1 \ge \varvec{\tau }^T \mathbf {W}_\tau \varvec{\tau }= (\mathbf {J}_p^\# \ddot{\mathbf {p}}_\varDelta + \mathbf {N}_p \varvec{\tau }_0)^T \mathbf {W}_\tau (\mathbf {J}_p^\# \ddot{\mathbf {p}}_\varDelta + \mathbf {N}_p \varvec{\tau }_0) \, . \end{aligned}$$ Hence, $$\begin{aligned} 1\ge & {} \ddot{\mathbf {p}}_\varDelta ^T \mathbf {J}_p^{\#^T} \mathbf {W}_\tau \mathbf {J}_p^\# \ddot{\mathbf {p}}_\varDelta + \ddot{\mathbf {p}}_\varDelta ^T \mathbf {J}_p^{\#^T} \mathbf {W}_\tau \mathbf {N}_p \varvec{\tau }_0\nonumber \\&{}+\varvec{\tau }_0^T \mathbf {N}_p^T \mathbf {W}_\tau \mathbf {J}_p^\# \ddot{\mathbf {p}}_\varDelta + \varvec{\tau }_0^T \mathbf {N}_p^T \mathbf {W}_\tau \mathbf {N}_p \varvec{\tau }_0 \, . \end{aligned}$$ We show that the second and third terms on the right-hand side of the above inequality are zero: $$\begin{aligned} \ddot{\mathbf {p}}_\varDelta ^T \mathbf {J}_p^{\#^T} \mathbf {W}_\tau \mathbf {N}_p \varvec{\tau }_0 = \varvec{\tau }_0^T \mathbf {N}_p^T \mathbf {W}_\tau \mathbf {J}_p^\# \ddot{\mathbf {p}}_\varDelta = 0 \, . \end{aligned}$$ To prove this, we only need to show that either \(\mathbf {J}_p^{\#^T} \mathbf {W}_\tau \mathbf {N}_p\) or \(\mathbf {N}_p^T \mathbf {W}_\tau \mathbf {J}_p^\#\) is zero, since they are transposes of each other. By replacing \(\mathbf {J}_p^\#\) from (16) and also \(\mathbf {N}_p\), we will have $$\begin{aligned} \mathbf {J}_p^{\#^T} \mathbf {W}_\tau \mathbf {N}_p= & {} (\mathbf {J}_p \mathbf {W}_\tau ^{-1} \mathbf {J}_p^T)^{-1} \mathbf {J}_p \mathbf {W}_\tau ^{-1} \mathbf {W}_\tau (\mathbf {I}- \mathbf {J}_p^\# \mathbf {J}_p) \\= & {} (\mathbf {J}_p \mathbf {W}_\tau ^{-1} \mathbf {J}_p^T)^{-1} \mathbf {J}_p - (\mathbf {J}_p \mathbf {W}_\tau ^{-1} \mathbf {J}_p^T)^{-1} \\&\mathbf {J}_p \mathbf {W}_\tau ^{-1} \mathbf {J}_p^T (\mathbf {J}_p \mathbf {W}_\tau ^{-1} \mathbf {J}_p^T)^{-1} \mathbf {J}_p \\= & {} 0 \, . \end{aligned}$$
Therefore, (37) yields $$\begin{aligned} \ddot{\mathbf {p}}_\varDelta ^T \mathbf {J}_p^{\#^T} \mathbf {W}_\tau \mathbf {J}_p^\# \ddot{\mathbf {p}}_\varDelta + \varvec{\tau }_0^T \mathbf {N}_p^T \mathbf {W}_\tau \mathbf {N}_p \varvec{\tau }_0 \le 1 \, . \end{aligned}$$ Knowing that both terms in the above inequality are non-negative, we can conclude that $$\begin{aligned} 0 \le \ddot{\mathbf {p}}_\varDelta ^T \mathbf {J}_p^{\#^T} \mathbf {W}_\tau \mathbf {J}_p^\# \ddot{\mathbf {p}}_\varDelta \le 1 \, . \end{aligned}$$ By replacing \(\mathbf {J}_p^\#\) from (16), we will have $$\begin{aligned} 0 \le \ddot{\mathbf {p}}_\varDelta ^T (\mathbf {J}_p \mathbf {W}_\tau ^{-1} \mathbf {J}_p^T)^{-1} \ddot{\mathbf {p}}_\varDelta \le 1, \end{aligned}$$ which is the same as the inequality in (17).
Appendix II
It is known from linear algebra that for any matrix there always exists a factorization into a product of three matrices. This factorization is called the singular value decomposition (SVD). For \(\mathbf {J}_q\), which is an \(n \times k\) matrix, the SVD can be written as $$\begin{aligned} \mathbf {J}_q = \mathbf {U}\mathbf {S}\mathbf {V}^T, \end{aligned}$$ where \(\mathbf {U}\) and \(\mathbf {V}\) are \(n \times n\) and \(k \times k\) unitary matrices, respectively, and \(\mathbf {S}\) is an \(n \times k\) diagonal matrix. The non-zero elements on the diagonal of \(\mathbf {S}\) are called the singular values of \(\mathbf {J}_q\). Since \(n > k\), \(\mathbf {S}\) has the form $$\begin{aligned} \mathbf {S}= \begin{bmatrix} \varvec{\varSigma }\\ \mathbf {0}_{(n-k) \times k} \end{bmatrix} \, , \end{aligned}$$ where \(\varvec{\varSigma }\) is a \(k \times k\) diagonal matrix. Here, it is assumed that \(\mathbf {J}_q\) is rank deficient, i.e. the rank of \(\mathbf {J}_q\) is \(r < k\). Thus, \((k-r)\) of the singular values are zero and \(\mathbf {S}\) can be written as $$\begin{aligned} \mathbf {S}= \begin{bmatrix} \varvec{\varSigma }_1&\mathbf {0}_{r \times (k-r)} \\ \mathbf {0}_{(n-r) \times r}&\qquad \mathbf {0}_{(n-r) \times (k-r)} \end{bmatrix} \, , \end{aligned}$$ where \(\varvec{\varSigma }_1\) is an \(r \times r\) diagonal matrix containing the non-zero singular values of \(\mathbf {J}_q\). Now, if we multiply \(\mathbf {U}\) by \(\mathbf {S}\), we will have $$\begin{aligned} \mathbf {U}\mathbf {S}= [\mathbf {U}_1 \; \mathbf {U}_2] \mathbf {S}= [\mathbf {U}_1 \varvec{\varSigma }_1 \; \; \mathbf {0}_{n \times (k-r)}], \end{aligned}$$ where \(\mathbf {U}_1\) and \(\mathbf {U}_2\) consist of the first r columns and the last \((n-r)\) columns of \(\mathbf {U}\), respectively. Let \(\mathbf {V}_1\) and \(\mathbf {V}_2\) denote the matrices consisting of the first r columns and the last \((k-r)\) columns of \(\mathbf {V}\), respectively. Therefore, \(\mathbf {J}_q\) can be written as $$\begin{aligned} \mathbf {J}_q = \mathbf {U}\mathbf {S}\mathbf {V}^T = [\mathbf {U}_1 \varvec{\varSigma }_1 \; \; \mathbf {0}_{n \times (k-r)}] \begin{bmatrix} \mathbf {V}_1^T \\ \\ \mathbf {V}_2^T \end{bmatrix} = \mathbf {U}_1 \varvec{\varSigma }_1 \mathbf {V}_1^T \, . \end{aligned}$$ Now, we can define \(\mathbf {J}_{q_c} = \mathbf {U}_1 \varvec{\varSigma }_1\), which is an \(n \times r\) matrix, and \(\mathbf {J}_{q_r} = \mathbf {V}_1^T\), which is an \(r \times k\) matrix. Hence, $$\begin{aligned} \mathbf {J}_q = \mathbf {J}_{q_c} \mathbf {J}_{q_r} \, . \end{aligned}$$ Note that both \(\mathbf {J}_{q_c}\) and \(\mathbf {J}_{q_r}\) have rank r.
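A minimal sketch of this factorization with NumPy's SVD, assuming a numerical rank tolerance (names are illustrative):

```python
import numpy as np

def full_rank_factorization(Jq, tol=1e-10):
    """Appendix II factorization Jq = Jqc @ Jqr with Jqc of full column rank
    and Jqr of full row rank, obtained from the singular value decomposition."""
    U, sv, Vt = np.linalg.svd(Jq)            # Jq = U @ S @ Vt
    r = int(np.sum(sv > tol * sv[0]))        # numerical rank
    Jqc = U[:, :r] * sv[:r]                  # U1 @ Sigma1  (n x r)
    Jqr = Vt[:r, :]                          # V1^T         (r x k)
    return Jqc, Jqr

# Example: a rank-deficient 5x4 matrix built as a product of rank 2.
rng = np.random.default_rng(5)
Jq = rng.normal(size=(5, 2)) @ rng.normal(size=(2, 4))
Jqc, Jqr = full_rank_factorization(Jq)
print(Jqc.shape, Jqr.shape)                  # (5, 2) (2, 4)
print(np.allclose(Jqc @ Jqr, Jq))            # True
# The reduced weighting of Eq. (23) would then be W_rq = Jqc.T @ Wq @ Jqc
# for any positive definite Wq.
```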
The above decomposition of \(\mathbf {J}_q\) is not unique. The obvious reason is that it is always possible to create new matrices from the above-mentioned \(\mathbf {J}_{q_c}\) and \(\mathbf {J}_{q_r}\) by multiplying one of them by \(\mathbf {R}\) and the other one by \(\mathbf {R}^{-1}\), such as $$\begin{aligned} \mathbf {J}_q = \mathbf {J}_{q_c} (\mathbf {R}\mathbf {R}^{-1}) \mathbf {J}_{q_r} = (\mathbf {J}_{q_c} \mathbf {R}) (\mathbf {R}^{-1} \mathbf {J}_{q_r}), \end{aligned}$$ where \(\mathbf {R}\) is an arbitrary full-rank \(r \times r\) matrix. The non-uniqueness of \(\mathbf {J}_{q_c}\) has no effect on the corresponding ellipsoid inequality in (27). To prove this, we replace \(\mathbf {J}_{q_c}\) by \(\mathbf {J}_{q_c} \mathbf {R}\) in the matrix in (27). So, we will have $$\begin{aligned} \mathbf {A}= & {} \mathbf {J}(\mathbf {J}_{q_c} \mathbf {R}) \mathbf {W}_{r_q}^{-1} (\mathbf {R}^T \mathbf {J}_{q_c}^T) \mathbf {J}^T \\= & {} \mathbf {J}\mathbf {J}_{q_c} \mathbf {R}(\mathbf {R}^T \mathbf {J}_{q_c}^T \mathbf {W}_q \mathbf {J}_{q_c} \mathbf {R})^{-1} \mathbf {R}^T \mathbf {J}_{q_c}^T \mathbf {J}^T \\= & {} \mathbf {J}\mathbf {J}_{q_c} \mathbf {R}\mathbf {R}^{-1} (\mathbf {J}_{q_c}^T \mathbf {W}_q \mathbf {J}_{q_c})^{-1} (\mathbf {R}^T)^{-1} \mathbf {R}^T \mathbf {J}_{q_c}^T \mathbf {J}^T \\= & {} \mathbf {J}\mathbf {J}_{q_c} (\mathbf {J}_{q_c}^T \mathbf {W}_q \mathbf {J}_{q_c})^{-1} \mathbf {J}_{q_c}^T \mathbf {J}^T \\= & {} \mathbf {J}\mathbf {J}_{q_c} \mathbf {W}_{r_q}^{-1} \mathbf {J}_{q_c}^T \mathbf {J}^T , \end{aligned}$$ which proves that, for any choice of full-rank factorization of \(\mathbf {J}_q\), the matrix in (27) remains the same.
Appendix III
By plugging (29) into the left-hand side of (30), we will have $$\begin{aligned} (\mathbf {J}_q \varvec{\tau }- \mathbf {M}^{-1}\mathbf {J}_c^T \mathbf {f}_{vg})^T \mathbf {M}(\mathbf {J}_q \varvec{\tau }- \mathbf {M}^{-1}\mathbf {J}_c^T \mathbf {f}_{vg}) \le 1, \end{aligned}$$ which can be expanded to $$\begin{aligned} \varvec{\tau }^T \mathbf {J}_q^T \mathbf {M}\mathbf {J}_q \varvec{\tau }- 2 \varvec{\tau }^T \mathbf {J}_q^T \mathbf {J}_c^T \mathbf {f}_{vg} + \mathbf {f}_{vg}^T \mathbf {J}_c \mathbf {M}^{-1} \mathbf {J}_c^T \mathbf {f}_{vg} \le 1 \, . \end{aligned}$$ Now, if we prove that the middle term in the above inequality is zero, then, given that the third term is always non-negative, the implication on the right-hand side of (30) follows. To prove this, we substitute \(\mathbf {f}_{vg}\) from (5) into (38), which turns the middle term into $$\begin{aligned} 2 \varvec{\tau }^T \mathbf {J}_q^T \mathbf {J}_c^T (\mathbf {J}_c \mathbf {M}^{-1} \mathbf {J}_c^T)^{-1} (\dot{\mathbf {J}}_c {\dot{\mathbf {q}}} - \mathbf {J}_c \mathbf {M}^{-1} \mathbf {h}) \end{aligned}$$ Leaving aside the torque vector and the velocity- and gravity-dependent vector, and replacing \(\mathbf {J}_q\) from (10), the remaining matrix factor is $$\begin{aligned} \mathbf {B}^T \mathbf {N}_{c_M} \mathbf {M}^{-1} \mathbf {J}_c^T (\mathbf {J}_c \mathbf {M}^{-1} \mathbf {J}_c^T)^{-1} = \mathbf {B}^T \mathbf {N}_{c_M} \mathbf {J}_{c_M}^\# = 0, \end{aligned}$$ which proves the claim.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Azad, M., Babič, J. & Mistry, M.
Effects of the weighting matrix on dynamic manipulability of robots. Auton Robot 43, 1867–1879 (2019). https://doi.org/10.1007/s10514-018-09819-y
Keywords: Manipulability, Operational space
Characteristic on the Heating Deformation of Sleeve by Heating Method Youn, Il-Joong;Lyu, Sung-Ki;An, Chang-Woo;Ahn, In-Hyo 1 Nowadays, out of other transmission parts, the sleeve is getting more and more important part for exact and smooth shifting from gear ratio change whenever drivers are needed. To exact and smooth shifting when drivers are needed, all the parts connected with gear shifting should be machined exactly and having dimensions designers are intended. Especially, in case of the sleeve that the most important functional part to shift from gear ratio change that drivers are intended, it needs high precision grade and quality in both sides runout and outer dia runout as well as inner spline small dia & large dia. Because it's assembled with the synchro hub spline and shifted directly with the mating cone. So, it should be applied the hear treatment(hereinafter referred to H.T.M.T) to prevent the friction and percussion loss from shifting with mating cone. At this time, the deformation problems are raised from almost H.T.M.T. process and it makes the inferior part. A Study of Corrosion Resistance Improvement for Cr-Mo Steel in Long Term Service Jin, Yeung-Jun 8 It is no wonder that mechanical structures are accompanied by problems related to corrosion after being exposed to long hours of work. Corrosion of mechanical structures has been the most serious problem in the field of industry. The present study employed a laser beam irradiation test to improve the corrosion resistance of degraded Cr-Mo steel, which was used for more than 60,000 hours. To find the optimum irradiation test condition for the corrosion resistance of degraded Cr-Mo steel, hardness and residual stress measurements, micro-structural observation, and the electrochemical potentiokinetic reactivation (EPR) tests were performed with changes in laser beam test conditions including laser beam output, diameter, and velocity. Thus, the present study indicates that the optimum test condition and absorption energy for a laser beam test need to be determined to enhance corrosion resistance of degraded Cr-Mo steel. A Knowledge-based Electrical Fire Cause Diagnosis System using Fuzzy Reasoning Lee, Jong-Ho;Kim, Doo-Hyun 16 This paper presents a knowledge-based electrical fire cause diagnosis system using the fuzzy reasoning. The cause diagnosis of electrical fires may be approached either by studying electric facilities or by investigating cause using precision instruments at the fire site. However, cause diagnosis methods for electrical fires haven't been systematized yet. The system focused on database(DB) construction and cause diagnosis can diagnose the causes of electrical fires easily and efficiently. The cause diagnosis system for the electrical fire was implemented with entity-relational DB systems using Access 2000, one of DB development tools. Visual Basic is used as a DB building tool. The inference to confirm fire causes is conducted on the knowledge-based by combined approach of a case-based and a rule-based reasoning. A case-based cause diagnosis is designed to match the newly occurred fire case with the past fire cases stored in a DB by a kind of pattern recognition. The rule-based cause diagnosis includes intelligent objects having fuzzy attributes and rules, and is used for handling knowledge about cause reasoning. A rule-based using a fuzzy reasoning has been adopted. To infer the results from fire signs, a fuzzy operation of Yager sum was adopted. 
The reasoning is conducted on the rule-based reasoning that a rule-based DB system built with many rules derived from the existing diagnosis methods and the expertise in fire investigation. The cause diagnosis system proposes the causes obtained from the diagnosis process and showed possibility of electrical fire causes. A Study for Development and Characteristics of Electrostatic Eliminator for Charged Particles Jung, Yong-Chul;Kim, Joon-Sam;Lee, Dong-Hoon 22 On this study, we developed the electrostatic eliminator for charged particles in manufacturing process. The characteristics of the electrostatic eliminator were investigated, which is two kinds. The first one is Electrical Corona Discharged Type Ionizer. The second one is Photo Ionizer in using soft X-ray. From the experiment, we have obtained the following results. In case of Electrical Corona Discharged Ionizer, neutralization efficiency of charged particles were approximately saturated to 98% over 6.0kV, but as it is non-explosion proof, can not be used in flammable particle treatment process. While in case of photo Ionizer in using soft X-Ray, neutralization efficiency of charged particles were approximately 95%, and more its structure is explosion proof, could be used in flammable particle treatment process. Development of Implemental Procedure for K-Risk Based Inspection Lee, Hern-Chang;Shin, Pyong-Sik;Lim, Dae-Sik;Kim, Tae-Ok 31 To apply easily the K-RBI program in domestic industries, an implemental procedure for K-RBI program was prepared. The K-RBI program had been developed, based on API-581 BRD. Therefore, through the usage of the developed K-RBI program and the implemental procedure, industries would have a benefit from reduced costs by modifying a frequency of an inspection efficiently. Also, the reliability of facilities would be maximized through improvement of an inspection method for facilities, considering its risk. A Experimental Study on the Characteristics of Gas Explosion due to Vent Shape and Size Chae, Soo-Hyun;Jung, Soo-Il;Lee, Young-Soon 38 The majority of both small and large-scale experiments on gas explosion have been carried out in the explosion instruments with cylindrical tubes of a high length/diameter ratio and vessels of a high height/length ratio, focusing on investigating the interaction between propagating flame and obstacles inside the tubes or vessels. The results revealed that there is a strong interaction between the propagating flame and turbulence formed after the flame passes the obstacle. However this paper focuses on analyzing the pressure impact or profile outside the vent in vented gas explosion in a partially confined chamber by performing gas explosion experiments in a reduced-scale experimental assembly properly constructed. This study has considered eight different cases in gas explosion based on variation of three kinds of parameters such as height of vessel, shape of the vent and vent size, and reveals that the large vessel with big size circle vent is more danger to the target than others because the overpressure is spread out faraway horizontally and vertically. Measurements of Autoigniton Temperature(AIT) and Time Lag of BTX(Benzene, Toluene, Xylenes) Ha, Dong-Myeong 45 The AITs(autoignition temperatures) describe the minimum temperature to which a substance must be heated, without the application of a flame or spark, which will cause that substance to ignite. 
The AITs are often used as a factor in determining the upper temperature limit for processing operations and conditions for handling, storage and transportation, and in determining potential fire hazard from accidental contact with hot surfaces. The measurement AITs are dependent upon many factors, namely initial temperature, pressure, volume, fuel/air stoichiometry, catalyst material, concentration of vapor, time lag. Therefore, the AITs reported by different ignition conditions are sometimes significantly different. This study measured the AITs of benzene, toluene and xylene isomers from time lag using AS1M E659-78 apparatus. The experimental ignition delay times were a good agreement with the calculated ignition delay times by the proposed equations wtih a few A.A.D.(average absolute deviation). Also The experimental AITs of benzene, toluene, o-xylene, m-xylene and p-xylene were $583^{\circ}C,\;547^{\circ}C,\;480^{\circ}C,\;587^{\circ}C,\;and\;557^{\circ}C$, respectively. A Development on Assessment Method of PVC Gloves Used in Pest Control Program Lee, Su-Gil;Lee, Nae-Woo 53 Following a Mediterranean fruit fly outbreak in South Australia, a bait spray program involving the pesticides like malathion(MAL) was carried out. During the application, dermal exposure was considered for the pest controllers wearing PVC gloves. However there is a lack of information about PVC glove performance like break through times and permeation rates with MAL, therefore, a new analytical method for HPLC-UV was developed. A standard permeation test cell was used in this study. From the results of this study, more than 96% solubility of MAL was provided at 30% isopropyl alcohol in distilled water as a collecting media. However, there was significant decomposition of MAL when the solutions were kept at over $50^{\circ}C$ for 2-3 hours. As a mobile phase, 50% acetonitrile water solution (pH 6.0) gave the greater sensitivity compared with other compositions of acetonitrile solution. The arm section of the gloves had shorter breakthrough times and higher permeation rates compared with the palm. There was no malathion solution breakthrough up to 24 hours using the 1% MAL working strength solution. When the temperature was changed from $22{\pm}1^{\circ}C\;to\;37{\pm}1^{\circ}C$, the breakthrough times were decreased by 14.5% on palm and 37.5% on arm, and permeation rates were increased significantly. The findings of this study indicate that further investigations on used gloves, periods of use and varying working conditions like tasks and seasons should be carried out to assess potential worst case scenarios. The Quantitative Assessment of Occupational Accident Reduction by the Injury Ratio Survey Regulations Ahn, Hong-Seob 59 Injury Ratio Survey Regulations(IRS) was introduced to the construction industry in the Republic of Korea since 1992 and brought positive effect on occupational accidents reduction. There were tremendous decrease of injury ratios and enforcing of contractors' safety organizations from the beginning of IRS. In spite of these positive results, there were some negative effects such as contractors' shrinking injury reports to keep good injury ratios since these figures had a great impact on pre-qualification stage of bidding when general contractors were competing for new construction projects. 
Thus, this study aims to devote on lessening construction injury and elimination of above negative impacts through the quantitative statistic analysis of the effectiveness on the occupational accidents prevention of IRS. According to this assessment, there were decrease of from 6.37% to 44.34% in the accident ratios compared to those of non-IRS groups and decrease of from 3.32% to 83.51 % in the accident ratios compared to those of general industry including the unreported accidents. Risk Assessment for Hazardous Construction Work Recognized by Workers Son, Ki-Sang;Lee, Shin-Jae 67 This study is to investigate the related materials such as domestic law regulation, research paper, research report, and the other material, and to suggest suitable counter measures, to find out hazard degree for its works of workers and work place through direct survey, in order to determine risk score of each hazardous work which is designated by the Government, without consideration of labour's consciousness against risk level at a site. Therefore, a new questionnaire survey related to the decision of risk level are made and distributed to find out what risk level each worker recognizes. Also, the authors tried to approach reasonable conclusions after discussing reasonability of qualification standard and improving ideas of worker at hazardous work places with worker, faculty member, H&S manager, labour union. And the results show hazard degrees by each work kind of the above: 3.75 for working with machinery, 3.7 for steel structure, 3.5 for operation of tower crane, 3.51 for retaining wall, 3.85 for form work, 3.46 for scaffolding are obtained. This quantified risk can be applied to establishing a reasonable system to keep safe against hazardous works. A Study on Resisting Force of H-Shaped Beam Using Glass Web Plate Son, Ki-Sang;Jeon, Chang-Hyun 73 Generally beam design depends on the yielding and maximum strength of each member varying with its section shape. Web plate of H-shape beam has not been substituted with glass plate, because it is known that its strength and heat properties are different and it is limited to substitute the existing steel web with glass element. Ceiling height of each room should be decreased with more than 60-80cm due to the beam. Differently from this condition, glass web beam has a good point to see through it and sunshine can be penetrate into the other size especially when it is installed as of outside wall. And also, it can be safer due to controlling room inside easier, if the strength is applicate. This study is to show some applicability after finding out the properties using the test. The test members with a size of $1,600{\times}200{\times}300{\times}9mm$ being SS41 rolled steel having THK 9mm flange while having 8,10mm and reinforced glass 12mm thickness is bonded with epoxy bond under the condition of temperature $28^{\circ}C$, humidity 50%, bonding power 24Mpa. It is show reinforced glass has 5 times of fracture stress more than the common glass but $50{\sim}150%$ difference between these 2 kinds of glass was shown. Reinforce glass did not support the original upper flange after fracture but the common glass did the upper flange after unloading. Generally reinforced glass is stronger than the common one but the common glass having a part of crack on it, compared with reinforced glass having the overall fracture could be more useful in case of needing ductility. 
The Shear Lag Phenomenon in Bundled Tube Structure According to the Arrangement of Structural Members Kim, Young-Chan;Kim, Hyun 81 The purpose of this study is to examine the effect of column spacing and beam size on the lateral displacement and shear lag phenomenon in a bundled tube system. According to the parametric study, in which the spacing of columns and the sizes of columns and girders in the bundled tube were selected as parameters, increasing the size of the interior columns is the most efficient option, giving the largest reduction of lateral drift, if the steel tonnage of a frame can be increased. It was noticed that when column spacing was changed, shear lag was affected more by the exterior stiffness factor and ratio than by the interior ones, whereas when the column size was changed, the reverse occurred. The change of column spacing affected shear lag, lateral drift, and tonnage more than changes in column size or girder size. Analysis of Soil-Structure Interaction Considering Complicated Soil Profile Park, Jang-Ho 87 When a structure is constructed on a soil site, its behavior is strongly affected by the characteristics of the soil. Therefore, the effect of soil-structure interaction is an important consideration in the design of a structure on such a site. Precise analysis of soil-structure interaction requires a proper description of the soil profile. However, most approaches are nearly impractical for soil exhibiting material discontinuity and complex geometry, since they cannot precisely consider complicated soil profiles. To overcome these difficulties, an improved integration method is adopted that makes it easy to integrate over an element with a material discontinuity. As a result, the mesh can be generated rapidly and be highly structured, leading to regular and precise stiffness matrices. The influence of the soil profile on the response is examined by the presented method. It is seen that the presented method can easily be applied to soil-structure interaction problems with complicated soil profiles and produces reliable results regardless of material discontinuities. A Reliability Analysis on the To-Box Reinforcement Method of PSC Beam Bridges The goal of this study is to show how to increase the safety of deteriorated PSC beam bridges by the to-box reinforcing method. This method changes the open girder section into a closed box section by connecting the bottom flanges of neighboring PSC girders with precast panels embedding PS tendons at the anchor block. The box section is composed of three concrete members with different casting ages: RC slab, PSC beam, and precast panel. This difference in age requires a time-dependent analysis considering construction sequences. The reliability index and failure probability are produced by an AFOSM reliability analysis. Five schemes are considered transversely and two longitudinally. The full reinforcing scheme, transversely and longitudinally, shows the highest reliability index, but it requires a higher retrofit cost. The partial reinforcing schemes 4 and 4-1 are recommended in this study as the most economical schemes. A Study on the Strength Change of Used Pipe Support(III) Paik, Shin-Won;Choi, Soon-Ju 101 Formwork is a temporary structure that supports its own weight and that of freshly placed concrete as well as construction live loads. On construction sites, pipe supports are usually used as the shores that form part of the slab formwork.
The strength of a pipe support decreases as it is repeatedly used at construction sites. Among the accidents and failures that occur during concrete construction, there are many formwork failures, which usually happen while concrete is being placed. The objective of this study is to find out how the strength of used and unused pipe supports changes with aging. In this study, 2857 pipe supports were prepared. Of these, 2337 pipe supports were lent to construction companies free of charge, and 520 pipe supports were kept outdoors. Compressive strength was measured by knife-edge tests and plate tests every 3 months. Test results show that the strength of unused as well as used pipe supports decreased with age, use frequency and load carried, and the strength of used pipe supports was lower than that of unused pipe supports of the same age. The strength of pipe supports used from day 191 to the present did not satisfy the specification of KS F 8001. In this study, the strength of pipe supports as a function of age, use frequency and load carried was predicted using SPSS 12.0. It was found that the strength of a pipe support used for 5 years was reduced to 42.8%. These results show that attention has to be paid to formwork design when used pipe supports are employed. Therefore, the present results can provide a firm basis for preventing formwork collapses. Assessment of Long-Term Effectiveness of Speed Monitoring Displays on Speed Variation Lee, Sang-Soo 107 Speeding is one of the major causes of frequent and severe traffic accidents in school zones. In this paper, the long-term effectiveness of speed monitoring displays (SMD) on speed variability was investigated through a field study in a school zone environment. The performance difference was discussed using several dependent variables, including average speed, 85th percentile speed, and speed distribution. Study results showed that vehicle speeds began to drop where the driver recognized the presence of an SMD, and average speed was reduced by about 12.4 percent (5.8 km/h) at the SMD location. This speed reduction was observed throughout the day regardless of time of day. Statistical tests showed that the speed difference was statistically significant. In addition, analysis of the speed distribution showed that the number of speeding vehicles was greatly reduced after the SMD was installed, and the 85th percentile speed also decreased from 54.3 km/h to 45.0 km/h. Therefore, it was concluded that the application of SMD had a positive impact on driver behavior over a long period of time. Maximum Crack Width Control in Concrete Bridges Affected By Corrosion Cho, Tae-Jun 114 As one of the serviceability limit states, the prediction and control of crack width in reinforced concrete or PSC bridges are very important for the design of durable structures. However, the current bridge design specifications do not provide quantitative information for the prediction and control of crack width affected by the initiation and propagation of corrosion. Considering the life span of concrete bridges, an improved control equation for crack width affected by time-dependent general corrosion is proposed. The developed corrosion and crack width control models can be used for the design and maintenance of prestressed and non-prestressed reinforcements by varying time, w/c, cover depth, and section geometry.
It can also support rational criteria for the quantitative management and the prediction of the remaining life of concrete structures. Psychophysical Load for Females Depending on Arm Posture, Repetition of Wrist Motion and External Load Kee, Do-Hyung 122 This study investigated the effect of arm posture, wrist motion repetition and external load on perceived discomfort through an experiment. Eleven female college students participated in the experiment, in which shoulder, elbow and wrist motion, wrist motion repetition, and external load were used as independent variables. The results showed that only external load had a significant effect on perceived discomfort. The perceived discomfort increased linearly with external load. Based on the results of this and the previous study on males, it was concluded that the effect of external load on perceived discomfort was larger than that of the other posture- and motion-repetition-related variables. This implies that external load is the most important factor to consider first when assessing postural load. Development of a Road-map for Promoting Product Safety Standards Lim, Hyeon-Kyo;Ko, Byung-In 127 In 2002, the Product Liability Act came into effect in Korea, giving efforts for Product Safety a new opportunity to promote safety standardization. Under the supervision of the Korean Agency for Technology and Standards (ATS) and the Korea Standards Association (KSA), the enterprise titled "Standardization of Product Safety" took its first step in 2000. Since then, many standards and guidelines for product safety have been developed. The results of the enterprise took the form of technical manuals as well as reports, technical guidelines, and specific technical safety standards. In this paper, the authors describe these sequential efforts for Product Safety and introduce the basic concept on which the standardization of the Product Safety Management System was conducted and individual safety standards have been developed. Based on this systematic concept, a global road-map as well as specific road-maps for developing safety standards in individual industries are supplied. Finally, suggestions for proceeding to a whole risk management system including other risk factors are appended. A Validity Verification of Human Error Probability using a Fuzzy Model Jang, Tong-Il;Lee, Yong-Hee;Lim, Hyeon-Kyo 137 Quantification of error possibility in an HRA process should be performed so that the result of the qualitative analysis can be utilized in other areas in conjunction with overall safety estimation results. The quantification is also an essential process for analyzing the error possibility in detail and obtaining countermeasures for the errors through screening procedures. In previous studies on the quantification of error possibility, nominal values were assigned by experts' judgements and utilized as the corresponding probabilities. The values assigned from experts' experiences and judgements, however, require verification of their reliability. In this study, the validity of new error possibility values in a new MCR design was verified by using Onisawa's model, which utilizes fuzzy linguistic values to estimate human error probabilities. In this model, error probabilities are represented as the analyst's estimations in natural-language expressions instead of numerical values. As a result, the experts' estimates of error probabilities agreed well with the existing error probability estimation model.
Thus, it was concluded that the occurrence probabilities of errors derived from the human error analysis process can be assessed by the nominal values suggested in previous studies. It is also expected that our analysis method can supplement the conventional HRA method, because the nominal values take into consideration various influencing factors such as performance shaping factors (PSFs).
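As an illustration of how linguistic estimates can be mapped onto a single nominal probability, the sketch below uses generic triangular fuzzy numbers and centroid defuzzification; the linguistic categories and their probability ranges are illustrative assumptions, not the membership functions of Onisawa's model or of the study summarized above.

```python
# Minimal illustration: turn linguistic likelihood estimates into one nominal
# error probability via triangular fuzzy numbers and centroid defuzzification.
# The categories and their (low, mode, high) values are illustrative assumptions.
import numpy as np

FUZZY_SETS = {
    "very unlikely": (1e-5, 1e-4, 1e-3),
    "unlikely":      (1e-4, 1e-3, 1e-2),
    "possible":      (1e-3, 1e-2, 1e-1),
    "likely":        (1e-2, 1e-1, 0.5),
}

def centroid(tri, n=1000):
    """Centroid (defuzzified value) of a triangular membership function."""
    a, b, c = tri
    x = np.linspace(a, c, n)
    mu = np.where(x <= b, (x - a) / (b - a), (c - x) / (c - b))
    return np.trapz(mu * x, x) / np.trapz(mu, x)

def nominal_probability(expert_terms):
    """Average the defuzzified values of several experts' linguistic estimates."""
    return np.mean([centroid(FUZZY_SETS[t]) for t in expert_terms])

print(nominal_probability(["unlikely", "possible", "unlikely"]))
```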
Six announcements I did a podcast interview with Julia Galef for her series "Rationally Speaking." See also here for the transcript (which I read rather than having to listen to myself stutter). The interview is all about Aumann's Theorem, and whether rational people can agree to disagree. It covers a lot of the same ground as my recent post on the same topic, except with less technical detail about agreement theory and more … well, agreement. At Julia's suggestion, we're planning to do a follow-up podcast about the particular intractability of online disagreements. I feel confident that we'll solve that problem once and for all. (Update: Also check out this YouTube video, where Julia offers additional thoughts about what we discussed.) When Julia asked me to recommend a book at the end of the interview, I picked probably my favorite contemporary novel: The Mind-Body Problem by Rebecca Newberger Goldstein. Embarrassingly, I hadn't realized that Rebecca had already been on Julia's show twice as a guest! Anyway, one of the thrills of my life over the last year has been to get to know Rebecca a little, as well as her husband, who's some guy named Steve Pinker. Like, they both live right here in Boston! You can talk to them! I was especially pleased two weeks ago to learn that Rebecca won the National Humanities Medal—as I told Julia, Rebecca Goldstein getting a medal at the White House is the sort of thing I imagine happening in my ideal fantasy world, making it a pleasant surprise that it happened in this one. Huge congratulations to Rebecca! The NSA has released probably its most explicit public statement so far about its plans to move to quantum-resistant cryptography. For more see Bruce Schneier's Crypto-Gram. Hat tip for this item goes to reader Ole Aamot, one of the only people I've ever encountered whose name alphabetically precedes mine. Last Tuesday, I got to hear Ayaan Hirsi Ali speak at MIT about her new book, Heretic, and then spend almost an hour talking to students who had come to argue with her. I found her clear, articulate, and courageous (as I guess one has to be in her line of work, even with armed cops on either side of the lecture hall). After the shameful decision of Brandeis in caving in to pressure and cancelling Hirsi Ali's commencement speech, I thought it spoke well of MIT that they let her speak at all. The bar shouldn't be that low, but it is. From far away on the political spectrum, I also heard Noam Chomsky talk last week (my first time hearing him live), about the current state of linguistics. Much of the talk, it struck me, could have been given in the 1950s with essentially zero change (and I suspect Chomsky would agree), though a few parts of it were newer, such as the speculation that human languages have many of the features they do in order to minimize the amount of computation that the speaker needs to perform. The talk was full of declarations that there had been no useful work whatsoever on various questions (e.g., about the evolutionary function of language), that they were total mysteries and would perhaps remain total mysteries forever. Many of you have surely heard by now that Terry Tao solved the Erdös Discrepancy Problem, by showing that for every infinite sequence of heads and tails and every positive integer C, there's a positive integer k such that, if you look at the subsequence formed by every kth flip, there comes a point where the heads outnumber tails or vice versa by at least C. 
This resolves a problem that's been open for more than 80 years. For more details, see this post by Timothy Gowers. Notably, Tao's proof builds, in part, on a recent Polymath collaborative online effort. It was a big deal last year when Konev and Lisitsa used a SAT-solver to prove that there's always a subsequence with discrepancy at least 3; Tao's result now improves on that bound by ∞. Posted in Announcements, Nerd Interest, Quantum | 75 Comments » Bell inequality violation finally done right A few weeks ago, Hensen et al., of the Delft University of Technology and Barcelona, Spain, put out a paper reporting the first experiment that violates the Bell inequality in a way that closes off the two main loopholes simultaneously: the locality and detection loopholes. Well, at least with ~96% confidence. This is big news, not only because of the result itself, but because of the advances in experimental technique needed to achieve it. Last Friday, two renowned experimentalists—Chris Monroe of U. of Maryland and Jungsang Kim of Duke—visited MIT, and in addition to talking about their own exciting ion-trap work, they did a huge amount to help me understand the new Bell test experiment. So OK, let me try to explain this. While some people like to make it more complicated, the Bell inequality is the following statement. Alice and Bob are cooperating with each other to win a certain game (the "CHSH game") with the highest possible probability. They can agree on a strategy and share information and particles in advance, but then they can't communicate once the game starts. Alice gets a uniform random bit x, and Bob gets a uniform random bit y (independent of x). Their goal is to output bits, a and b respectively, such that a XOR b = x AND y: in other words, such that a and b are different if and only if x and y are both 1. The Bell inequality says that, in any universe that satisfies the property of local realism, no matter which strategy they use, Alice and Bob can win the game at most 75% of the time (for example, by always outputting a=b=0). What does local realism mean? It means that, after she receives her input x, any experiment Alice can perform in her lab has a definite result that might depend on x, on the state of her lab, and on whatever information she pre-shared with Bob, but at any rate, not on Bob's input y. If you like: a=a(x,w) is a function of x and of the information w available before the game started, but is not a function of y. Likewise, b=b(y,w) is a function of y and w, but not of x. Perhaps the best way to explain local realism is that it's the thing you believe in, if you believe all the physicists babbling about "quantum entanglement" just missed something completely obvious. Clearly, at the moment two "entangled" particles are created, but before they separate, one of them flips a tiny coin and then says to the other, "listen, if anyone asks, I'll be spinning up and you'll be spinning down." Then the naïve, doofus physicists measure one particle, find it spinning down, and wonder how the other particle instantly "knows" to be spinning up—oooh, spooky! mysterious! Anyway, if that's how you think it has to work, then you believe in local realism, and you must predict that Alice and Bob can win the CHSH game with probability at most 3/4. What Bell observed in 1964 is that, even though quantum mechanics doesn't let Alice send a signal to Bob (or vice versa) faster than the speed of light, it still makes a prediction about the CHSH game that conflicts with local realism. 
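As a quick numerical aside, both bounds can be checked in a few lines of code. The sketch below (not from the original post) enumerates all deterministic classical strategies (shared randomness can't beat the best deterministic one) and evaluates the standard entangled strategy with the usual measurement angles.

```python
# Check the CHSH bounds: classical strategies win at most 3/4,
# while the standard quantum strategy wins (1 + 1/sqrt(2))/2 ~ 0.854.
import itertools
import numpy as np

# Classical: a deterministic strategy is a pair of response tables a(x), b(y) with bits x, y.
best_classical = 0.0
for a0, a1, b0, b1 in itertools.product([0, 1], repeat=4):
    wins = sum(((a0, a1)[x] ^ (b0, b1)[y]) == (x & y) for x in (0, 1) for y in (0, 1))
    best_classical = max(best_classical, wins / 4)
print("classical optimum:", best_classical)            # 0.75

# Quantum: measure the shared |00>+|11> pair at angles 0, pi/4 (Alice) and pi/8, -pi/8 (Bob).
def win_prob(theta_a, theta_b, x, y):
    # For (|00>+|11>)/sqrt(2), P(outcomes agree) = cos^2(theta_a - theta_b).
    p_same = np.cos(theta_a - theta_b) ** 2
    return p_same if (x & y) == 0 else 1 - p_same

angles_a, angles_b = [0, np.pi / 4], [np.pi / 8, -np.pi / 8]
quantum = np.mean([win_prob(angles_a[x], angles_b[y], x, y) for x in (0, 1) for y in (0, 1)])
print("quantum strategy:", quantum)                     # ~0.8536
```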
(And thus, quantum mechanics exhibits what one might not have realized beforehand was even a logical possibility: it doesn't allow communication faster than light, but simulating the predictions of quantum mechanics in a classical universe would require faster-than-light communication.) In particular, if Alice and Bob share entangled qubits, say $$\frac{\left| 00 \right\rangle + \left| 11 \right\rangle}{\sqrt{2}},$$ then there's a simple protocol that lets them violate the Bell inequality, winning the CHSH game ~85% of the time (with probability (1+1/√2)/2 > 3/4). Starting in the 1970s, people did experiments that vindicated the prediction of quantum mechanics, and falsified local realism—or so the story goes. The violation of the Bell inequality has a schizophrenic status in physics. To many of the physicists I know, Nature's violating the Bell inequality is so trivial and obvious that it's barely even worth doing the experiment: if people had just understood and believed Bohr and Heisenberg back in 1925, there would've been no need for this whole tiresome discussion. To others, however, the Bell inequality violation remains so unacceptable that some way must be found around it—from casting doubt on the experiments that have been done, to overthrowing basic presuppositions of science (e.g., our own "freedom" to generate random bits x and y to send to Alice and Bob respectively). For several decades, there was a relatively conservative way out for local realist diehards, and that was to point to "loopholes": imperfections in the existing experiments which meant that local realism was still theoretically compatible with the results, at least if one was willing to assume a sufficiently strange conspiracy. Fine, you interject, but surely no one literally believed these little experimental imperfections would be the thing that would rescue local realism? Not so fast. Right here, on this blog, I've had people point to the loopholes as a reason to accept local realism and reject the reality of quantum entanglement. See, for example, the numerous comments by Teresa Mendes in my Whether Or Not God Plays Dice, I Do post. Arguing with Mendes back in 2012, I predicted that the two main loopholes would both be closed in a single experiment—and not merely eventually, but in, like, a decade. I was wrong: achieving this milestone took only a few years. Before going further, let's understand what the two main loopholes are (or rather, were). The locality loophole arises because the measuring process takes time and Alice and Bob are not infinitely far apart. Thus, suppose that, the instant Alice starts measuring her particle, a secret signal starts flying toward Bob's particle at the speed of light, revealing her choice of measurement setting (i.e., the value of x). Likewise, the instant Bob starts measuring his particle, his doing so sends a secret signal flying toward Alice's particle, revealing the value of y. By the time the measurements are finished, a few microseconds later, there's been plenty of time for the two particles to coordinate their responses to the measurements, despite being "classical under the hood." Meanwhile, the detection loophole arises because in practice, measurements of entangled particles—especially of photons—don't always succeed in finding the particles, let alone ascertaining their properties. So one needs to select those runs of the experiment where Alice and Bob both find the particles, and discard all the "bad" runs where they don't. 
This by itself wouldn't be a problem, if not for the fact that the very same measurement that reveals whether the particles are there, is also the one that "counts" (i.e., where Alice and Bob feed x and y and get out a and b)! To someone with a conspiratorial mind, this opens up the possibility that the measurement's success or failure is somehow correlated with its result, in a way that could violate the Bell inequality despite there being no real entanglement. To illustrate, suppose that at the instant they're created, one entangled particle says to the other: "listen, if Alice measures me in the x=0 basis, I'll give the a=1 result. If Bob measures you in the y=1 basis, you give the b=1 result. In any other case, we'll just evade detection and count this run as a loss." In such a case, Alice and Bob will win the game with certainty, whenever it gets played at all—but that's only because of the particles' freedom to choose which rounds will count. Indeed, by randomly varying their "acceptable" x and y values from one round to the next, the particles can even make it look like x and y have no effect on the probability of a round's succeeding. Until a month ago, the state-of-the-art was that there were experiments that closed the locality loophole, and other experiments that closed the detection loophole, but there was no single experiment that closed both of them. To close the locality loophole, "all you need" is a fast enough measurement on photons that are far enough apart. That way, even if the vast Einsteinian conspiracy is trying to send signals between Alice's and Bob's particles at the speed of light, to coordinate the answers classically, the whole experiment will be done before the signals can possibly have reached their destinations. Admittedly, as Nicolas Gisin once pointed out to me, there's a philosophical difficulty in defining what we mean by the experiment being "done." To some purists, a Bell experiment might only be "done" once the results (i.e., the values of a and b) are registered in human experimenters' brains! And given the slowness of human reaction times, this might imply that a real Bell experiment ought to be carried out with astronauts on faraway space stations, or with Alice on the moon and Bob on earth (which, OK, would be cool). If we're being reasonable, however, we can grant that the experiment is "done" once a and b are safely recorded in classical, macroscopic computer memories—in which case, given the speed of modern computer memories, separating Alice and Bob by half a kilometer can be enough. And indeed, experiments starting in 1998 (see for example here) have done exactly that; the current record, unless I'm mistaken, is 18 kilometers. (Update: I was mistaken; it's 144 kilometers.) Alas, since these experiments used hard-to-measure photons, they were still open to the detection loophole. To close the detection loophole, the simplest approach is to use entangled qubits that (unlike photons) are slow and heavy and can be measured with success probability approaching 1. That's exactly what various groups did starting in 2001 (see for example here), with trapped ions, superconducting qubits, and other systems. Alas, given current technology, these sorts of qubits are virtually impossible to move miles apart from each other without decohering them. So the experiments used qubits that were close together, leaving the locality loophole wide open. 
So the problem boils down to: how do you create long-lasting, reliably-measurable entanglement between particles that are very far apart (e.g., in separate labs)? There are three basic ideas in Hensen et al.'s solution to this problem. The first idea is to use a hybrid system. Ultimately, Hensen et al. create entanglement between electron spins in nitrogen vacancy centers in diamond (one of the hottest—or coolest?—experimental quantum information platforms today), in two labs that are about a mile away from each other. To get these faraway electron spins to talk to each other, they make them communicate via photons. If you stimulate an electron, it'll sometimes emit a photon with which it's entangled. Very occasionally, the two electrons you care about will even emit photons at the same time. In those cases, by routing those photons into optical fibers and then measuring the photons, it's possible to entangle the electrons. Wait, what? How does measuring the photons entangle the electrons from whence they came? This brings us to the second idea, entanglement swapping. The latter is a famous procedure to create entanglement between two particles A and B that have never interacted, by "merely" entangling A with another particle A', entangling B with another particle B', and then performing an entangled measurement on A' and B' and conditioning on its result. To illustrate, consider the state $$ \frac{\left| 00 \right\rangle + \left| 11 \right\rangle}{\sqrt{2}} \otimes \frac{\left| 00 \right\rangle + \left| 11 \right\rangle}{\sqrt{2}} $$ and now imagine that we project the first and third qubits onto the state $$\frac{\left| 00 \right\rangle + \left| 11 \right\rangle}{\sqrt{2}}.$$ If the measurement succeeds, you can check that we'll be left with the state $$\frac{\left| 00 \right\rangle + \left| 11 \right\rangle}{\sqrt{2}}$$ in the second and fourth qubits, even though those qubits were not entangled before. So to recap: these two electron spins, in labs a mile away from each other, both have some probability of producing a photon. The photons, if produced, are routed to a third site, where if they're both there, then an entangled measurement on both of them (and a conditioning on the results of that measurement) has some nonzero probability of causing the original electron spins to become entangled. But there's a problem: if you've been paying attention, all we've done is cause the electron spins to become entangled with some tiny, nonzero probability (something like 6.4×10-9 in the actual experiment). So then, why is this any improvement over the previous experiments, which just directly measured faraway entangled photons, and also had some small but nonzero probability of detecting them? This leads to the third idea. The new setup is an improvement because, whenever the photon measurement succeeds, we know that the electron spins are there and that they're entangled, without having to measure the electron spins to tell us that. In other words, we've decoupled the measurement that tells us whether we succeeded in creating an entangled pair, from the measurement that uses the entangled pair to violate the Bell inequality. And because of that decoupling, we can now just condition on the runs of the experiment where the entangled pair was there, without worrying that that will open up the detection loophole, biasing the results via some bizarre correlated conspiracy. 
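For readers who'd like to verify the entanglement-swapping algebra above, here is a minimal numerical check (a sketch added for illustration, not taken from the paper or the experiment): project qubits 1 and 3 onto the Bell state and confirm that qubits 2 and 4 are left sharing that same Bell state, with success probability 1/4.

```python
# Entanglement swapping check: start with (|00>+|11>)/sqrt(2) on qubits (1,2) and (3,4),
# project qubits (1,3) onto the Bell state, and verify qubits (2,4) end up in the Bell state.
import numpy as np

bell = np.array([1, 0, 0, 1]) / np.sqrt(2)           # (|00> + |11>)/sqrt(2)
state = np.kron(bell, bell).reshape(2, 2, 2, 2)       # tensor indices: qubits 1, 2, 3, 4

# Apply <bell| to qubits 1 and 3; what remains is an (unnormalized) state on qubits 2, 4.
bell_13 = bell.reshape(2, 2)                          # indices: qubits 1, 3
residual = np.einsum('ac,abcd->bd', bell_13.conj(), state)

p_success = np.sum(np.abs(residual) ** 2)             # probability of this Bell outcome
post_state = (residual / np.sqrt(p_success)).reshape(4)
print(p_success)                                      # 0.25
print(np.allclose(post_state, bell))                  # True: qubits 2 and 4 now share the Bell state
```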
It's as if the whole experiment were simply switched off, except for those rare lucky occasions when an entangled spin pair gets created (with its creation heralded by the photons). On those rare occasions, Alice and Bob swing into action, measuring their respective spins within the brief window of time—about 4 microseconds—allowed by the locality loophole, seeking an additional morsel of evidence that entanglement is real. (Well, actually, Alice and Bob swing into action regardless; they only find out later whether this was one of the runs that "counted.") So, those are the main ideas (as well as I understand them); then there's lots of engineering. In their setup, Hensen et al. were able to create just a few heralded entangled pairs per hour. This allowed them to produce 245 CHSH games for Alice and Bob to play, and to reject the hypothesis of local realism at ~96% confidence. Jungsang Kim explained to me that existing technologies could have produced many more events per hour, and hence, in a similar amount of time, "particle physics" (5σ or more) rather than "psychology" (2σ) levels of confidence that local realism is false. But in this type of experiment, everything is a tradeoff. Building not one but two labs for manipulating NV centers in diamond is extremely onerous, and Hensen et al. did what they had to do to get a significant result. The basic idea here, of using photons to entangle longer-lasting qubits, is useful for more than pulverizing local realism. In particular, the idea is a major part of current proposals for how to build a scalable ion-trap quantum computer. Because of cross-talk, you can't feasibly put more than 10 or so ions in the same trap while keeping all of them coherent and controllable. So the current ideas for scaling up involve having lots of separate traps—but in that case, one will sometimes need to perform a Controlled-NOT, or some other 2-qubit gate, between a qubit in one trap and a qubit in another. This can be achieved using the Gottesman-Chuang technique of gate teleportation, provided you have reliable entanglement between the traps. But how do you create such entanglement? Aha: the current idea is to entangle the ions by using photons as intermediaries, very similar in spirit to what Hensen et al. do. At a more fundamental level, will this experiment finally convince everyone that local realism is dead, and that quantum mechanics might indeed be the operating system of reality? Alas, I predict that those who confidently predicted that a loophole-free Bell test could never be done, will simply find some new way to wiggle out, without admitting the slightest problem for their previous view. This prediction, you might say, is based on a different kind of realism. Posted in Bell's Theorem? But a Flesh Wound!, Quantum | 158 Comments » Ask Me Anything: Diversity Edition With the fall semester imminent, and by popular request, I figured I'd do another Ask Me Anything (see here for the previous editions). This one has a special focus: I'm looking for questions from readers who consider themselves members of groups that have historically been underrepresented in the Shtetl-Optimized comments section. Besides the "obvious"—e.g., women and underrepresented ethnic groups—other examples might include children, traditionally religious people, jocks, liberal-arts majors… (but any group that includes John Sidles is probably not an example). If I left out your group, please go ahead and bring it to my and your fellow readers' attention! 
My overriding ideal in life—what is to me as Communism was to Lenin, as Frosted Flakes are to Tony the Tiger—is people of every background coming together to discover and debate universal truths that transcend their backgrounds. So few things have ever stung me more than accusations of being a closed-minded ivory-tower elitist white male nerd etc. etc. Anyway, to anyone who's ever felt excluded here for whatever reason, I hope this AMA will be taken as a small token of goodwill. Similar rules apply as to my previous AMAs: Only one question per person. No multi-part questions, or questions that require me to read a document or watch a video and then comment on it. Questions need not have anything to do with your underrepresented group (though they could). Math, science, futurology, academic career advice, etc. are all fine. But please be courteous; anything gratuitously nosy or hostile will be left in the moderation queue. I'll stop taking further questions most likely after 24 hours (I'll post a warning before closing the thread). Update (Sep. 6): For anyone from the Boston area, or planning to visit it, I have an important piece of advice. Do not ever, under any circumstances, attempt to visit Walden Pond, and tell everyone you know to stay away. After we spent 40 minutes driving there with a toddler, the warden literally screamed at us to go away, that the park was at capacity. It wasn't an issue of parking: even if we'd parked elsewhere, we just couldn't go. Exceptions were made for the people in front of us, but not for us, the ones with the 2-year-old who'd been promised her weekend outing would be to meet her best friend at Walden Pond. It's strangely fitting that what for Thoreau was a place of quiet contemplation, is today purely a site of overcrowding and frustration. Another Update: OK, no new questions please, only comments on existing questions! I'll deal with the backlog later today. Thanks to everyone who contributed. Posted in Ask Me Anything | 185 Comments » You are currently browsing the Shtetl-Optimized weblog archives for September, 2015.
communications chemistry Atomic structure observations and reaction dynamics simulations on triple phase boundaries in solid-oxide fuel cells Shu-Sheng Liu ORCID: orcid.org/0000-0002-7713-35791 na1, Leton C. Saha ORCID: orcid.org/0000-0001-7721-78121,2 na1, Albert Iskandarov ORCID: orcid.org/0000-0002-4294-97703, Takayoshi Ishimoto1, Tomokazu Yamamoto4, Yoshitaka Umeno3, Syo Matsumura1,4 & Michihisa Koyama ORCID: orcid.org/0000-0003-4347-99231,5,6 Communications Chemistry volume 2, Article number: 48 (2019) Cite this article Catalytic mechanisms Characterization and analytical techniques Molecular dynamics The triple phase boundary (TPB) of metal, oxide, and gas phases in the anode of solid oxide fuel cells plays an important role in determining their performance. Here we explore the TPB structures from two aspects: atomic-resolution microscopy observation and reaction dynamics simulation. Experimentally, two distinct structures are found with different contact angles of metal/oxide interfaces, metal surfaces, and pore opening sizes, which have not previously been adopted in simulations. Reaction dynamics simulations are performed using realistic models for the hydrogen oxidation reaction (HOR) at the TPB, based on extensive development of reactive force field parameters. As a result, the activity of different structures towards HOR is clarified, and a higher activity is obtained on the TPB with smaller pore opening size. Three HOR pathways are identified: two types of hydrogen diffusion processes, and one type of oxygen migration process which is a new pathway. Among several types of fuel cells, the solid oxide fuel cell (SOFC) has the highest efficiency and could utilize various fuels1. The anode materials in SOFC are typically based on porous composites of Ni with oxides such as yttria-stabilized zirconia (YSZ) or doped ceria1,2. Linking the complex porous structure to the cell performance is a critical issue for the development of SOFC. Realistic three-dimensional (3D) microstructures were obtained after the introduction of focused ion beam-scanning electron microscopy (FIB-SEM) technique to the field of SOFC3,4,5,6.
Meanwhile, numerical modeling of porous electrode characteristics has been extended from two-dimensional7,8 to 3D with the development of simulation methods, such as the lattice Boltzmann method8,9 and phase-field method10. Microscopically, the most important location related to the SOFC performance is the triple phase boundary (TPB), where the fuels are electrochemically oxidized11. The TPB structure and its local chemical environment determine the thermodynamics and kinetics of the electrochemical reaction. Numerous works have been conducted to understand the local activity and reaction mechanism at the TPB. Patterned anodes were fabricated as ideal models to allow easy evaluation of the relationship between the TPB length and electrochemical characteristics12,13,14,15 and to develop kinetic modeling methods14,15,16,17,18,19. However, both the experimental characteristics and modeling results show large variation among different reports20,21. For example, the current densities normalized by TPB length21 obtained by Bieberle et al.13 and Yao et al.15 are about three orders of magnitude higher than those obtained by Mizusaki et al.12, while different rate-determining steps were reported in those works, including either H2 adsorption/desorption on the Ni surface or removal of O2− from the YSZ surface13, charge transfer reactions coupled with an H2O-related process15, and Ni surface hydrogen oxidation12. First-principles calculations were also applied to the TPB; nevertheless, the deduced mechanisms and estimated energy barriers differ significantly even when using similar structural models22,23,24,25,26. Recently, Liu et al.21 analyzed those deviations and pointed out the importance of elementary processes directly related to the TPB at the atomic level. This encouraged us to trace the elementary reaction mechanism at the TPB more carefully. Meanwhile, a reactive force field (ReaxFF) method has been developed to model the Ni/YSZ/pore TPB and deduce the hydrogen oxidation mechanism27. However, Ni surface amorphization and Ni decohesion from YSZ are observed at 1250 K, which, we believe, may be due to a lack of model stability or parameter reliability. The TPB strongly affects the properties (including electrocatalysis) of various supported metal catalysts. Owing to their small sizes and easier handling for observation, the atomic structures of TPB in nano-catalysts, such as Ni/MgAl2O428, Pd/CeO229, Pt/SnO230, and Au/γ-Fe2O331 have been determined using transmission electron microscopy (TEM) or scanning TEM (STEM). In contrast, Ni/YSZ cermet in conventional SOFC is a bulk composite material whose particle sizes range from the submicrometer to the micrometer scale. Its complex microstructure makes TEM specimen preparation challenging32. Thus there has been no report on the atomic structure of TPBs in a practical cell, while significant experimental attention has been paid to the interfacial structures of Ni/YSZ33,34,35,36,37.
Overall, our reactive molecular dynamics (MD) simulations show that the hydrogen oxidation reaction (HOR) mechanism is notably influenced by the TPB models. Additionally, we find that the HOR can proceed through a newly discovered reaction pathway. TPB structure observation In this work, the Ni/YSZ anode was fabricated through conventional processes38. For the microscopy observation, the pores in Ni/YSZ were infiltrated with epoxy resin. The TEM specimens were extracted from the resin-embedded bulk sample by the lift-out technique using an FIB-SEM and thinned by gallium ion beam with a final voltage as low as 3 kV. In order to remove the damaged layers and the redeposition on the surfaces as much as possible, post-FIB processing was conducted using an Ar ion-milling machine with low voltages from 900 to 600 V39. Even after such mild procedure, some TPB areas were damaged by the ion beam because the sputtering rate of resin was higher than that of the anode materials. Before showing the observed structures of TPB, we defined three angle parameters for the TPB. As shown in Supplementary Fig. 1, θNi, the so-called contact angle in metal/ceramic material, is the dihedral angle between the Ni/pore and Ni/YSZ boundaries taking ceramic as the substrate. θYSZ and θpore are defined in a similar way. Owing to the poor wettability between metallic Ni and ceramic YSZ, θNi was evaluated by Nelson et al. to be in the range of 140°–155°, based on 3D reconstruction40. Large values of θNi were also observed in our sample (Supplementary Fig. 2). The value of θYSZ falls into two main cases if we restrict the spatial scale to several nanometers around the TPB. One is when YSZ acts as a flat substrate to the Ni particle (θYSZ = 180°, Supplementary Fig. 1a), which corresponds to a small pore opening size (θpore) and is the dominant case in Supplementary Fig. 2. If the above-mentioned statistical values of θNi are adopted40, θpore should be in the range of 25°–40°. In the other case, YSZ does not extend in a straight way from the Ni/YSZ interface to the TPB area, and θYSZ is much smaller than 180° (Supplementary Fig. 1b), thereby a big pore opening size (θpore) is obtained with a value usually exceeding 90° as pointed out by red arrows in Supplementary Fig. 2. Hereafter, these two TPB structures will be referred to as TPB-1 and TPB-2 types, respectively. TPB-1 has a small θpore, and one such example is indicated by the yellow arrow in Supplementary Fig. 2. The TEM observation was conducted in two directions of YSZ: the [0\(\bar 1\)3] and [001] zone axes, which have an included angle of 18.4°. The corresponding orientation relationships between Ni and YSZ are Ni[12\(\bar 3\)]//YSZ[0\(\bar 1\)3] (Supplementary Fig. 3) and Ni[1\(\bar 1\)0]//YSZ[001] (Supplementary Fig. 4), respectively. The abrupt Ni/YSZ and resin/YSZ interfaces were maintained after rotation (Fig. 1a, b), which indicates that they were always parallel to the observation directions. However, the Ni/resin interface (Ni surface) shows an apparent slope (marked by an arrow in Fig. 1a), which becomes abrupt in Fig. 1b. This phenomenon indicates that only the latter observation direction was parallel to the Ni surface. The contact angle between Ni and YSZ (θNi) was measured from Fig. 1b to be 145°, which is in good agreement with the previously reported values40. Herein, θpore equals the supplementary angle of θNi, namely 35°. Atomic structure of triple phase boundary-1. 
a Low-magnification transmission electron microscopy (TEM) image at YSZ[0\(\bar 1\)3] zone axis. The arrow in a points out the slope feature of Ni surface in this observation direction. b Low-magnification high-angle annular dark-field–scanning TEM (HAADF-STEM) image at YSZ[001] zone axis. The dash arc in b stands for the contact angle (θNi). c High-resolution TEM image at YSZ[0\(\bar 1\)3] zone axis. The arrow in c denotes a pore after electron beam damage of the resin. d HAADF-STEM image at YSZ[001] zone axis. The arrow in d points to a thin layer of NiO on the Ni surface. Scale bars in a, b: 50 nm; in c, d: 1 nm The high-resolution TEM (HRTEM) image at the YSZ[0\(\bar 1\)3] zone axis is shown in Fig. 1c. The YSZ(200) plane is exposed to the gas phase and in contact with the Ni(111) plane. Along the Ni/YSZ interface, small steps could be detected that appeared blurred owing to the lattice overlap, as shown in Supplementary Fig. 5. The "steps" are where the Ni and YSZ are not perfectly parallel along the TEM observation direction or in the image plane. They are believed to be a part of the interfacial migration process that has been previously observed in NiO/YSZ41 and Ni/YSZ42 as well as other metal–ceramic systems such as Ag/ZnO43 and Cu/MgO44. During the baking and reduction processes of NiO/YSZ, the interfacial layers of both Ni(O) and YSZ are expected to migrate due to many factors, such as thermal expansion, lattice shrinkage during reduction, lattice misfit, oxygen bonds change, etc. The migration of Ni, O, and Zr(Y) should proceed toward thermodynamic equilibrium. The interfacial distance between the contacting Ni and YSZ layers was measured to be 0.238 nm from the HRTEM image after background filtering (Supplementary Fig. 5). This value is intermediate to the lattice distances of Ni(111) and YSZ(200) planes and close to the reported value of 0.244 nm for a metal–metal (Ni-Zr) featured interconnection33 that was observed in a Ni/YSZ specimen having the same orientation as TPB-1 and prepared by the reduction/ion milling procedures similar to those adopted in the present study. Namely, the oxygen tends to be lost from the Ni/YSZ interface in TPB-1, and we attribute this to the NiO reduction process rather than the ion damage during FIB preparation. The final voltage adopted by us in FIB is 3 kV, which will cause a damage depth of around 5 nm if the milled material is silicon45. Furthermore, we adopted low voltage post-polishing by using Ar ion during the cooling with liquid nitrogen. This process removed most of the damaged layers on the specimen surfaces, especially the non-resin areas. All those procedures resulted in a damaged layer of merely <3 nm, judging from the Ni or YSZ surfaces in Fig. 1 and Supplementary Fig. 6. The specimen is estimated to have a thickness of about 70 nm near the Ni surface (resin) according to the geometrical parameters of the slope in Fig. 1a and the known angles. Therefore, the thin damaged layer on the surface will not change the whole structure of the specimen. Actually, O loss from the Ni/YSZ interface is not surprising because the anode was reduced under a 2% humidified hydrogen environment. Reportedly, O-Ni and Zr-Ni bonds are energetically favorable35 and the O content at the interface may change with its partial pressure in the gas36. Thus the O content at the Ni/YSZ interface in TPB-1 is expected to be augmented with increasing oxygen partial pressure in the atmosphere. 
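As a rough consistency check on the "intermediate" interfacial spacing quoted above, one can compare it with bulk d-spacings computed from textbook lattice constants (a_Ni ≈ 0.352 nm and a_YSZ ≈ 0.514 nm for 8 mol% yttria; these values are an added assumption, not taken from this paper):

$$d_{\mathrm{Ni}(111)} = \frac{a_{\mathrm{Ni}}}{\sqrt{3}} \approx 0.203\ \mathrm{nm}, \qquad d_{\mathrm{YSZ}(200)} = \frac{a_{\mathrm{YSZ}}}{2} \approx 0.257\ \mathrm{nm},$$

so the measured 0.238 nm does indeed lie between the two bulk d-spacings.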
When the YSZ zone axis is changed to [001], the Ni surface can be indexed to (220) as shown in Fig. 1d. Actually, the dihedral angle between Ni(220) and Ni(111) planes in a single crystal is 35°, which is identical to θpore and confirms that the Ni surface facing the pore is the Ni(220) plane. Partial oxidation was found on the Ni surface (as pointed out by an arrow in Fig. 1d) but not at the TPB. Because of the damaged layers and thickness difference between the grain edges and bulk, the STEM contrast was weakened. We also found another region of interest with the same orientation and configuration as the one shown in Fig. 1d. In fact, its HRTEM image (Supplementary Fig. 6) exhibits the ion damage on the surfaces more clearly and supports the facet index. Therefore, we obtained the atomic structure near TPB, showing the Ni(220) and YSZ(200) surfaces to the pore at an interfacial orientation of Ni[1\(\bar 1\)0]//YSZ[001] and Ni(111)//YSZ(200). The same oriented interfacial structure was observed in the Ni/YSZ samples prepared by directionally solidified eutectics33 and molecular beam epitaxy34. This is, in fact, one of the most stable interfaces found in Ni/YSZ cermet37. Clear TEM images on TPB-2 were obtained from a specimen shown in Fig. 2 and Supplementary Fig. 7 with different magnifications. Apparently, this TPB has a larger θpore than that in TPB-1. An HRTEM image (Fig. 2b) was taken at the Ni[11\(\bar 2\)] zone axis. It was found that YSZ is close to the [001] zone axis, which can be seen more clearly from the diffraction patterns in Supplementary Fig. 7. The Ni(111) and YSZ(020) planes exposed to the resin (pore) and YSZ(200) planes act as one of the contact surfaces at the Ni/YSZ interface. θNi and θpore were measured to be 167° and 103° from the dihedral angles between Ni(111) and YSZ(200) and between Ni(111) and YSZ(020), respectively. Hence, θpore in TPB-2 is about two times larger than that in TPB-1. The Ni(2\(\bar 2\)0) planes display a dihedral angle of 76.8° to the Ni/YSZ interface. These planes show a lattice misfit of merely 0.4%, relative to the YSZ(020) plane, when taking into consideration that two layers of Ni(2\(\bar 2\)0) planes match with one layer of YSZ(020) plane. Such a good match would stabilize the interface, although the contact surface of Ni phase was estimated to be a high-index (957) plane. Previously, we have reported other high-index interfaces that ubiquitously exist in the conventional Ni/YSZ anode37. Later in this study, we will compare the structures of TPB-2 and TPB-1 in terms of their HOR activity. Atomic structure of triple phase boundary-2. a Low-magnification transmission electron microscopy (TEM) image. b High-resolution TEM image at Ni[11\(\bar 2\)] zone axis. Scale bars in a, b: 100 and 1 nm, respectively Force field development In order to perform reliable calculation for the Ni/YSZ/H2 systems, we optimized the parameters adopted in the previously published ReaxFF descriptions of Ni and YSZ27,46,47. POTFIT48 code, combined with LAMMPS49, was used to determine the new ReaxFF parameters for H-O-H, YSZ-H2, Ni-H2, Ni-H, Ni-YSZ, and O-O interactions, together with 24 additional energy values corresponding to the intermediate energy profiles of the HOR at TPB. The parameters were optimized to fit an extensive set of interaction energies in the Ni/YSZ/H2 systems, and the fitting results are shown in Fig. 3 and Supplementary Figs. 8–12. 
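Conceptually, the fitting step just described minimizes a weighted sum of squared deviations between force-field predictions and first-principles reference data. The toy sketch below illustrates the idea on a simple Morse potential; it is a schematic stand-in under that assumption, not the actual POTFIT/LAMMPS ReaxFF workflow used in this work.

```python
# Toy force-field fitting: fit Morse parameters (D, alpha, r0) to reference energies
# at several interatomic distances by weighted least squares.
import numpy as np
from scipy.optimize import minimize

def morse(params, r):
    D, alpha, r0 = params
    return D * (1.0 - np.exp(-alpha * (r - r0))) ** 2 - D

# Illustrative "reference" data standing in for first-principles energies (eV vs. distance in angstrom).
r_ref = np.array([1.8, 2.0, 2.2, 2.5, 3.0, 4.0])
e_ref = morse((2.0, 1.5, 2.2), r_ref) + 0.01 * np.array([1, -1, 0.5, -0.5, 0.2, 0.0])
weights = np.ones_like(e_ref)

def objective(params):
    # Weighted sum of squared deviations from the reference energies.
    return np.sum(weights * (morse(params, r_ref) - e_ref) ** 2)

fit = minimize(objective, x0=(1.0, 1.0, 2.0), method="Nelder-Mead")
print(fit.x)   # recovered (D, alpha, r0), close to (2.0, 1.5, 2.2)
```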
Figure 3a shows that the H-O-H equilibrium angle of the published ReaxFF has a deviation of about 4° compared with that of the first-principles results, whereas our developed ReaxFF shows only 1°. For the published ReaxFF, the energy of nondissociative adsorption of H2 on YSZ(Y-site) is overestimated by 64% in comparison with the first-principles data (Fig. 3b), whereas the overestimation obtained with our developed ReaxFF is reduced to 33%. Also, it is shown in Fig. 3c that the discrepancy of 0.6 Å obtained for the equilibrium Ni-H2 distance using the published ReaxFF has been eliminated with our ReaxFF. As shown in Fig. 3d, the magnitude of the interaction energy of Ni-YSZ interface at the equilibrium distance is underestimated by 86% with the published ReaxFF, while the underestimation is only 9% by using our ReaxFF. In summary, our developed ReaxFF parameters can reproduce the results of the first-principle calculations with much higher accuracy than the published ReaxFF27. More details on the way we evaluate the deviations between ReaxFF simulations and first-principles calculations can be found in Supplementary Notes 1, while the higher accuracy of the developed ReaxFF can be further proved by the comparison data in Table 1 and Supplementary Tables 1–11. In addition, the interactions of H-O-H, YSZ-H2, Ni-H2, Ni-H, Ni-YSZ, and O-O are described in Supplementary Notes 2–6 and the energies for the HOR are described in Supplementary Note 7. Interaction energies and angle-energy curves. a H-O-H angle energy as a function of angle (θ). b YSZ(111) (Y-site)–H2 interaction energy. c Ni(220)–H2 interaction energy. d Ni(111)–YSZ(111) interaction energy. b–d are plotted as a function of distance (r). The interaction energy and angle energy are expressed as relative to the equilibrium structure, through single-point energy calculations by changing (r) and (θ). The insets show the models that are used for the energy calculation. Balls represent hydrogen (H), oxygen (O), zirconium (Zr), yttrium (Y), and nickel (Ni) atoms. YSZ(111) slab with 3-layer Zr/Y and 6-layer O is used in b and the entire structure is shown in d Table 1 Comparison of average deviations for ReaxFF TPB modeling and TEM image simulation Based on the observed images, the initial atomic models were built in Material Studio by taking into account the actual lattice distances and crystal orientations. In an effort to validate the initial models, the QSTEM50 software was used to simulate the TEM images by the multislice method. The observed and simulated images as well as the model for TPB-1 are shown in Fig. 4a, b, d. For TPB-1, there was a good agreement between the simulated (Fig. 4b) and observed (Fig. 4a) images, except for the possible thinning and damage of Ni and YSZ surfaces in Fig. 4a. To clarify the interfacial distance between Ni and YSZ, we superimposed the model on the experimental image in Fig. 4c. In the experimental image, it was difficult to determine the exact TPB region. Therefore, we constructed two more models by increasing the number of Ni layers toward the gas phase, resulting in different interfacial distances of 1.9, 1.8, and 1.7 Å between Ni and O at the TPB region, hereafter referred to, respectively, as model-1 (Fig. 4 and Supplementary Fig. 13a), model-2 (Supplementary Fig. 13b), and model-3 (Supplementary Fig. 13c). The model of TPB-2 will be illustrated in a later section. Model structures of triple phase boundary-1. a Experimental image. b Simulated image by using the QSTEM software. 
c Atomistic model imposed on the experimental image. d Atomistic model containing Ni (1770 atoms) and YSZ (Zr1100Y200O2500, 8 mol% of yttria) Reaction dynamics Before simulating the reaction dynamics, we optimized the structures of TPB-1 by adding hydrogen (H) atoms at randomly selected atop sites of Ni surfaces. To investigate the hydrogen oxidation process, we prepared the model with adsorbed H assuming that the adsorption is sufficiently fast and is in equilibrium. The surface coverage of H is predicted to be 7%, which is the maximum coverage under the conditions of 1073 K and 1 atm with 100% H2 following the reported calculation method51. After the optimization, H atom on the Ni(220) surface is found to be at the long bridge site. For model-1, a single pathway was observed during the simulation as shown in Fig. 5a. In particular, H moved on the Ni(220) surface to react with O at the TPB site, resulting in the formation of OH (see the series of snapshots in Supplementary Fig. 14). This pathway is noted herein as HOR-I, which was reported by Vogler et al.18 using kinetic modeling and defined as an H spillover process. Subsequently, in their density functional theory (DFT) calculations Shishikin et al.22, Cucinotta et al.23, and Liu et al.26 concluded that the H spillover process is the most favorable pathway. Liu et al. also found that the reaction barrier of this process varies from 0.46 to 0.57 eV depending on the local structures of TPB sites. This process was also identified by employing ReaxFF-MD simulations27. For our simulation, a total of 3 OH species were formed after 700 ps. One of them moved to the YSZ surface near TPB with the distances of 2.7 and 2.1 Å from the nearest Ni and Zr atoms, respectively. The other two OH species gradually moved toward the Ni(220) surface sites, which was specified as the OH spillover or back-transfer in the kinetic study by Goodwin et al.19. In contrast, the OH spillover process was not favorable according to the above-mentioned DFT calculations22,23,26, while it was not perceived in the ReaxFF-MD simulations27. Reaction dynamics simulations for triple phase boundary-1. a Hydrogen oxidation reaction (HOR)-I pathway. b HOR-II pathway. The trajectory of hydrogen (H) starts from the arrow, and the circle denotes the final position where OH species is formed after a simulation period of 700 ps. The local environment near the formed OH species is presented in the magnified image, which is linked to the circle by dotted lines. c HOR-III pathway. A trajectory of oxygen (O) starts from the arrow, and the circle denotes the final position where OH species is formed after a simulation period of 700 ps. The local environment near the formed OH species is presented in the magnified image, which is linked to the circle by dotted lines. Trajectories are shown along the Ni[1\(\bar 1\)0] and YSZ[001] directions Two other HOR pathways were observed on model-2 during the simulation. One pathway is dominated by H atom incorporation into the interstitial site of the Ni lattice and diffusion to the interface between Ni and YSZ. In this case, H eventually combines with O at the Ni/YSZ interface to form OH. This pathway was labeled as HOR-II (Fig. 5b), which has been discussed theoretically by Weng et al.52. They concluded that a barrier of 0.89 eV needs to be overcome for H diffusion through the Ni bulk, while the surface diffusion barrier is only 0.1 eV. 
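To put those two barriers in perspective at the operating temperature used in this work (1073 K), a back-of-the-envelope Arrhenius estimate (assuming comparable attempt frequencies; this is an added illustrative estimate, not a result from ref. 52) gives

$$\frac{k_{\mathrm{bulk}}}{k_{\mathrm{surf}}} \approx \exp\!\left(-\frac{0.89 - 0.10\ \mathrm{eV}}{k_B T}\right) = \exp\!\left(-\frac{0.79}{8.617\times 10^{-5} \times 1073}\right) \approx 2\times 10^{-4},$$

i.e., bulk diffusion is thousands of times slower than surface diffusion yet still thermally accessible at SOFC temperatures.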
Nevertheless, experiments have demonstrated enhanced H diffusion and absorption in the Ni lattice at high temperatures53,54. Another pathway, referred to as HOR-III, is associated with O atom migration from YSZ to the Ni(220) surface through the Ni bulk to react with H on the Ni surface and form OH. A trajectory of O together with a snapshot of the formed OH is shown in Fig. 5c. A series of snapshots with time for HOR-II and HOR-III is provided in Supplementary Fig. 14. To validate the determined ReaxFF parameters for O migration into the Ni bulk, we calculated the NiO formation energy to be −0.8 eV, which is in reasonable agreement with the first-principles result (−1.0 eV)55. In addition, we did not observe O migration into the Ni bulk during the static optimization (see Supplementary Fig. 15); this supports the view that O migration in Ni is a thermally activated process that can occur at certain two-phase interfaces such as TPB-1 at high temperature (see Supplementary Fig. 16). Experimentally, O migration through the Ni bulk has been reported for a Ce-Zr oxide-supported Ni catalyst56. For model-3, two OH species are formed. One of them is formed through the HOR-III pathway and remains on a Ni surface site. The other OH, formed via HOR-I, is found to be 2.88 and 2.0 Å away from the nearest Ni and Zr, respectively. In short, we observed three pathways for the formation of OH in the HOR in the above simulations on TPB-1, namely, HOR-I (H diffusion on the Ni surface toward the TPB), HOR-II (H diffusion through the Ni bulk to the Ni/YSZ interface), and HOR-III (O migration from YSZ to the Ni surface through bulk transfer).

For TPB-2 (Fig. 2b), the YSZ phase was not at the exact [001] zone axis. However, we adopted an ideal YSZ[001] model to represent the observed TPB structure for simplicity, and the model is presented in Supplementary Fig. 17. It should be noted that, owing to the high index (957) of the Ni surface, it is not worthwhile to increase the number of YSZ(020) layers toward the gas phase, as this may lead to improper contact between the two phases at the TPB region. Therefore, we constructed only one model. After optimization, the H atom was located at the fcc site of the Ni(111) surface. In this simulation, a single OH is formed through the HOR-I pathway. A trajectory of H is also shown in Supplementary Fig. 17. Note that the OH remains at the TPB, at distances of 1.7 and 2.3 Å from the nearest Ni and Zr, respectively. The other two pathways (HOR-II and HOR-III) may also occur, because O and H migration into the Ni bulk was detected (Supplementary Fig. 18), although the activity of the HOR through these processes appears lower than at TPB-1.

In order to clarify the impact of the TPB structures as well as the different reaction pathways, we repeated all calculations six more times by changing the H positions on the Ni surface. The resulting time-averaged number of OH normalized by the TPB length is summarized in Table 2. It is calculated as \(\left(\sum N_{\mathrm{OH}}\,\mathrm{d}t/t\right)/l\), where NOH is the number of OH species, and dt, t, and l indicate the time step between frames, the total simulation time, and the TPB length, respectively. The TPB lengths of TPB-1 and TPB-2 are 27.167 and 27.170 Å, respectively, which are almost the same.
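The normalization just defined is straightforward to compute from a saved trajectory; the following minimal sketch implements \((\sum N_{\mathrm{OH}}\,\mathrm{d}t/t)/l\) under the assumption that the OH count is recorded at evenly spaced frames. Variable names and the example counts are illustrative, not the actual simulation output.

```python
def normalized_oh_production(n_oh_per_frame, dt_ps, tpb_length_angstrom):
    """Time-averaged number of OH normalized by the TPB length.

    Implements (sum_i N_OH(i) * dt / t) / l, with t = n_frames * dt,
    following the definition given in the text for Table 2.
    """
    n_frames = len(n_oh_per_frame)
    total_time = n_frames * dt_ps
    time_average = sum(n * dt_ps for n in n_oh_per_frame) / total_time
    return time_average / tpb_length_angstrom

# Hypothetical trajectory: OH count recorded every 1 ps over 700 ps for TPB-1.
counts = [0] * 200 + [1] * 250 + [2] * 150 + [3] * 100
print(normalized_oh_production(counts, dt_ps=1.0, tpb_length_angstrom=27.167))
```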
For TPB-1, OH formation through the HOR-I pathway was observed nine, four, and five times during the simulations using model-1, model-2, and model-3, respectively. In addition, only one OH formation through the HOR-II pathway was observed in the simulations using either model-1 or model-3, while model-2 exhibited three OH formations. Finally, the HOR-III pathway was not detected in model-1, whereas the simulations using model-2 and model-3 displayed OH formation through the HOR-III pathway once and twice, respectively. By contrast, the TPB-2 simulations showed four and one OH formations via the HOR-I and HOR-III pathways, respectively. Therefore, our simulations indicate that TPB-1 was more active than TPB-2 and that the activity at TPB-1 was influenced by the different interfacial distances between Ni and O. The HOR-I pathway seems to be the easiest pathway regardless of the model, and this agrees well with many studies using DFT calculations22,23,26.

Table 2 Normalized average production of OH

Apart from the HOR pathways observed in this study, an O spillover process, namely O migration from YSZ to the Ni surface leading to the formation of OH, was previously reported using kinetic modeling methods16,57,58. Later, this process was examined by Ammal et al.24 through DFT calculations and found to dominate over the H migration pathway. Fu et al. also investigated the O spillover process, which coexists with the H spillover process25. From our dynamics simulations, it can be concluded that the number of O migration events was limited: for TPB-1 (model-1) four O atoms migrated, while only one O atom migrated during the simulation using TPB-2. Liu et al. also pointed out the importance of this process for the higher activity based on a kinetic study21. Therefore, the TPB-1 structure appears more favorable for this process than TPB-2, although its relatively slow kinetics rarely leads to the formation of OH species. Finally, we did not observe H2O formation during the simulations. This may be due to the rate-limiting OH + H reaction step, as well as the limited number of H atoms, the consumption of H atoms through OH formation, or the limited simulation time. DFT calculations22,23,26, ReaxFF-MD27, and kinetic modeling18 have concluded that H2O is formed near the TPB for HOR-I, while Goodwin et al. reported H2O formation on the Ni surface site19. In this study, the MD simulations indicated that all pathways are possible routes toward H2O formation.

The motivation of this work is to understand the TPB on an atomic scale and to simulate the HOR process on realistic models. First, we fabricated a conventional Ni/YSZ half-cell, and then we prepared high-quality TEM specimens through an FIB lift-out technique and moderate post-polishing. Finally, the TPB structures were observed by TEM and STEM, through which, to the best of our knowledge, the real atomic structure of a TPB is visualized for the first time. Herein we demonstrated the existence of two TPB structures (TPB-1 and TPB-2), which are different from the structures assumed in the reported simulations. In TPB-1, Ni and YSZ formed a stable interface with an orientation of Ni[1\(\bar 1\)0]//YSZ[001] and Ni(111)//YSZ(200), and the Ni(220) and YSZ(200) surfaces were open to the pore. Near TPB-2, the Ni/YSZ interface was Ni(957)//YSZ(200) while Ni(111) and YSZ(020) were exposed to the pore. In these two structures, the angle (θpore) between the Ni and YSZ surfaces was very different, being 35° and 103°, respectively. After constructing models based on these realistic structures, we carried out ReaxFF simulations.
In order to obtain reliable simulation data, we began with an extensive development of ReaxFF parameters for Ni/YSZ/H2 systems to clarify the HOR at the TPB. Compared to the results of first-principles calculations, our developed ReaxFF parameters produced much smaller average energy deviations, only about one third of those obtained with the published ReaxFF parameters, as shown in Table 1. Using our own ReaxFF parameters and realistic models, three pathways for OH formation in the HOR were observed during the reaction dynamics simulations: HOR-I (H diffusion on the Ni surface toward the TPB), HOR-II (H diffusion through the Ni bulk to the Ni/YSZ interface), and a new pathway, HOR-III (O migration from YSZ to the Ni surface through bulk transfer). In addition, the migration of H and O was influenced by the TPB structure. The TPB-1 type structure was also found to be more active than TPB-2. Moreover, it was predicted that H2O may form at distinct sites on account of the OH back-transfer process. Our simulation results on the three models of TPB-1 and the one model of TPB-2 indicate that the HOR pathway depends on many factors around the TPB, and that it could be tuned by adjusting the local atomic structures. This finding agrees well with a recent study of the dopant effect on the HOR pathway, which changes with different cation dopants on ZrO259. In addition, there are many more reports on surface pathways like HOR-I than on the bulk pathways, including the HOR-II and HOR-III pathways observed in our simulations. Therefore, we propose that more studies should be carried out, both theoretically and experimentally, on the bulk pathways.

Finally, we would like to relate our results to anode performance and durability enhancement from three points of view. First, we found that the HOR-I pathway (H diffusion on the Ni surface toward the TPB) seems to be the easiest one regardless of the TPB structure. Hence, the performance can be enhanced by increasing the TPB length3,6. This is a well-known fact and can be realized experimentally by adjusting the Ni/YSZ ratio, Ni grain refinement, Ni infiltration, and so on, as long as the mechanical robustness is well maintained. Second, we observed a new pathway (HOR-III) that proceeds via the diffusion of O in the bulk Ni. During this process, Ni may be partially oxidized, especially under high fuel utilization conditions, which will lead to performance degradation and mechanical disruption. Therefore, care must be taken to ensure mild operating conditions. Dopants could also be added to the Ni/YSZ cermet to suppress Ni oxidation60. Third, we concluded that the TPB-1 type has higher activity toward the HOR than the TPB-2 type. The TPB-1 type was also observed more often in our TEM analysis, and it represents Ni standing on a flat YSZ substrate at the TPB with a very stable interface. Hence, the performance and durability are expected to improve if more TPBs of the TPB-1 type exist, which could be achieved by reducing the particle size ratio of Ni(O)/YSZ61 and by optimizing the thermal treatment while avoiding excessive Ni coarsening. However, these measures should be considered holistically, because the anode performance is the combined outcome of many factors.

Cell preparation

A YSZ (8 mol% yttria-stabilized zirconia) electrolyte-supported half-cell was fabricated in a conventional way. First, NiO and YSZ with a weight ratio of 55:45 were ball-milled in ethanol.
After drying off the ethanol, the NiO/YSZ powder was ground with 8 wt.% ethyl cellulose and α-terpineol solvent to make a slurry, which was then screen-printed onto the YSZ electrolyte support (thickness = 1 mm). The half-cell was sintered at 1400 °C for 3 h. Finally, the reduction of NiO was carried out in humid H2 (2% H2O) at 800 °C for 2 h.

TEM specimen preparation and atomic structure observation

The pores of the porous electrode were first infiltrated with epoxy resin under vacuum in a CitoVac (Struers) system, so that the bulk cell could be readily handled in the next step. TEM specimen lift-out was performed using an FIB-SEM (HITACHI MI4000L) instrument. The specimen polishing was performed while the FIB voltage was decreased from 30 to 3 kV. To remove the damaged/amorphous layers on the two sides of the specimen as much as possible, final polishing was carried out in an Ar ion milling machine (NanoMill Model 1040, Fischione) with a cold stage (−165 °C) at voltages between 0.6 and 1 kV. Structure observation at different scales was performed with a JEOL JEM3200FSK (TEM and STEM modes, 300 kV) and a JEOL JEM-ARM200F (TEM and STEM modes, 200 kV). Both machines were equipped with energy-dispersive X-ray spectroscopy detectors for elemental analysis.

TEM image simulation

STEM image simulations were performed using the QSTEM software (V2.22)50 based on the multislice method with the experimental parameters used in the JEOL ARM200F. The following electron beam parameters were employed: acceleration voltage = 200 kV, chromatic aberration CC = 1.6 mm, spherical aberration CS = 0.0011 mm, fifth-order spherical aberration C5 = 1.756 mm, convergence angle α = 24 mrad, and high-angle annular dark-field STEM detector acceptance semi-angle of 90–370 mrad. During the microscope operation, the two-fold astigmatism and focus were continuously adjusted by the user and can be identified manually from the live imaging and fast Fourier transforms of the live image. However, in the simulations the two-fold astigmatism and higher-order aberrations were neglected.

Force field theory

The ReaxFF62 is based on the concept of bond order (BO), which is calculated according to the following equation:

$$\mathrm{BO}_{ij}^{\prime} = \mathrm{BO}_{ij}^{\sigma} + \mathrm{BO}_{ij}^{\pi} + \mathrm{BO}_{ij}^{\pi\pi} = \exp\!\left[p_{\mathrm{bo}1}\left(\frac{r_{ij}}{r_{0}^{\sigma}}\right)^{p_{\mathrm{bo}2}}\right] + \exp\!\left[p_{\mathrm{bo}3}\left(\frac{r_{ij}}{r_{0}^{\pi}}\right)^{p_{\mathrm{bo}4}}\right] + \exp\!\left[p_{\mathrm{bo}5}\left(\frac{r_{ij}}{r_{0}^{\pi\pi}}\right)^{p_{\mathrm{bo}6}}\right],$$

where \(r_{0}^{\sigma}\), \(r_{0}^{\pi}\), and \(r_{0}^{\pi\pi}\) are the equilibrium bond lengths of the σ (single), π (double), and ππ (triple) bond contributions, \(r_{ij}\) is the interatomic distance between atoms i and j, and \(p_{\mathrm{bo}1}\)–\(p_{\mathrm{bo}6}\) are parameters of the ReaxFF.
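For clarity, the bond-order expression above can be evaluated directly; the sketch below does so for a single atom pair, using placeholder parameter values rather than the fitted Ni/YSZ/H2 set provided in Supplementary Data 1.

```python
import math

def bond_order_prime(r_ij, r0_sigma, r0_pi, r0_pipi,
                     p_bo1, p_bo2, p_bo3, p_bo4, p_bo5, p_bo6):
    """Uncorrected ReaxFF bond order BO'_ij = BO_sigma + BO_pi + BO_pipi.

    Each contribution is exp[p_bo_odd * (r_ij / r0)**p_bo_even]; with the
    usual sign conventions (p_bo1, p_bo3, p_bo5 < 0) the bond order decays
    smoothly with the interatomic distance r_ij.
    """
    bo_sigma = math.exp(p_bo1 * (r_ij / r0_sigma) ** p_bo2)
    bo_pi    = math.exp(p_bo3 * (r_ij / r0_pi)    ** p_bo4)
    bo_pipi  = math.exp(p_bo5 * (r_ij / r0_pipi)  ** p_bo6)
    return bo_sigma + bo_pi + bo_pipi

# Placeholder parameters for illustration only (not the fitted Ni/YSZ/H2 set).
print(bond_order_prime(r_ij=1.2, r0_sigma=1.0, r0_pi=1.1, r0_pipi=1.2,
                       p_bo1=-0.08, p_bo2=6.0, p_bo3=-0.15, p_bo4=5.0,
                       p_bo5=-0.30, p_bo6=4.0))
```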
The total system energy consists of several partial energy terms that depend on the BOs, such as the bond energy (Ebond), which in ReaxFF is given as:

$$E_{\mathrm{bond}} = -D_{\mathrm{e}}^{\sigma}\,\mathrm{BO}_{ij}^{\sigma}\exp\!\left[p_{\mathrm{be}1}\left(1-\left(\mathrm{BO}_{ij}^{\sigma}\right)^{p_{\mathrm{be}2}}\right)\right] - D_{\mathrm{e}}^{\pi}\,\mathrm{BO}_{ij}^{\pi} - D_{\mathrm{e}}^{\pi\pi}\,\mathrm{BO}_{ij}^{\pi\pi},$$

where De is the dissociation energy for each bond type, and pbe1 and pbe2 are parameters of the ReaxFF. Besides Ebond, other contributors to the total energy include the lone pair electron, under- and over-coordination, valence angle, and torsion energy terms, as well as the van der Waals (vdW) and Coulomb interactions, which do not depend on the BOs. Details of the force field can be found elsewhere62,63. For the interactions of H-O-H, YSZ-H2, Ni-H2, Ni-H, Ni-YSZ, and O-O, we developed a new set of ReaxFF atomic, bond, off-diagonal (used to describe BO and vdW pair interactions)46, and angle parameters. All these parameters are included in Supplementary Data 1.

Quantum mechanical calculations

In order to fit the ReaxFF parameters, QM calculations were performed using the first-principles DFT method implemented in the VASP code64,65. The exchange–correlation interactions were described by the Perdew–Burke–Ernzerhof functional. Spin-polarized calculations were applied throughout. To describe the electron–ion interactions, the projector augmented wave method was adopted. The cutoff energy of the plane-wave basis set was 400 eV. The electronic tolerance for the self-consistent loop and the energy tolerance for the ionic relaxation were 1 × 10−6 and 1 × 10−5 eV/atom, respectively. The nudged elastic band calculations have been described elsewhere21.

Fitting procedure

To fit the ReaxFF potential, we utilized the algorithm described in POTFIT48. The target function for minimization is based on the deviation between the calculated ReaxFF values and the DFT reference values according to the following equation:

$$Z(\alpha) = w_{\mathrm{e}}\sum_{i=1}^{n} u_{k}\left(E_{\mathrm{ReaxFF},i}(\alpha) - E_{\mathrm{FP},i}\right)^{2} + w_{\mathrm{s}}Z_{\mathrm{s}} + Z_{\mathrm{F}},$$

where n is the number of configuration structures, EFP,i are the reference values (such as energies) from the first-principles DFT calculations, EReaxFF,i(α) are the ReaxFF values calculated for the ith structure after parametrization by α, we and uk are the global and individual structural weights for the energy, and Zs, ZF, and ws indicate the stress term, the force term, and the global weight for the stress, respectively. In our fitting, the term wsZs was neglected. For Ni/YSZ/H2 systems, we fitted 69 parameters to improve the H-O-H, YSZ-H2, Ni-H2, Ni-H, Ni-YSZ, and O-O interaction energies, plus 24 energies for the HOR.

MD and mechanics simulations

First, models were built using the lattice distances observed in the experiment (see Fig. 4). They were then allowed to change owing to the lattice expansion of Ni and YSZ in the course of an isobaric–isothermal simulation at a temperature of 1073 K and a pressure of 1 atm. The main reaction dynamics was calculated in the canonical ensemble, corresponding to constant volume and temperature. The temperature and pressure were controlled with the Nose–Hoover thermostat66 and barostat67, respectively. The lattice parameters of two heterogeneous phases usually cannot match perfectly.
Therefore, to obtain a well-matched Ni/YSZ interface, we expanded or compressed (for TPB-1 and TPB-2, respectively) the softer nickel crystal according to the misfit as calculated in Eq. (4): $${\mathrm{Misfit}} = \frac{{m \ast a_{{\mathrm{Ni}}} - n \ast a_{{\mathrm{YSZ}}}}}{{n \ast a_{{\mathrm{YSZ}}}}}$$ where n and m are the number of unit cells for YSZ and Ni, respectively, and a is the plane lattice parameter. The misfit was minimized down to 3%. We used periodic boundary conditions to simulate an infinite horizontal surface. Because the simulation box length along the z axis was long, the periodicity in the z direction was virtually removed. We ran the MD trajectory for 700 ps. To optimize the TPB structures, we employed the FIRE method68. All MD and energy minimization calculations were carried out with USER-REAXC package69 as implemented in the LAMMPS code49. The data that support the findings of this study are available within the paper and its Supplementary Information and Data. Other relevant data are available from the corresponding authors upon reasonable request. Wachsman, E. D. & Lee, K. T. Lowering the temperature of solid oxide fuel cells. Science 334, 935–939 (2011). Atkinson, A. et al. Advanced anodes for high-temperature fuel cells. Nat. Mater. 3, 17–27 (2004). Wilson, J. R. et al. Three-dimensional reconstruction of a solid-oxide fuel-cell anode. Nat. Mater. 5, 541–544 (2006). Iwai, H. et al. Quantification of SOFC anode microstructure based on dual beam FIB-SEM technique. J. Power Sources 195, 955–961 (2010). Cronin, J. S., Wilson, J. R. & Barnett, S. A. Impact of pore microstructure evolution on polarization resistance of Ni-Yttria-stabilized zirconia fuel cell anodes. J. Power Sources 196, 2640–2643 (2011). Viveta, N. et al. Effect of Ni content in SOFC Ni-YSZ cermets: A three-dimensional study by FIB-SEM tomography. J. Power Sources 196, 9989–9997 (2011). Kenjo, T., Osawa, S. & Fujikawa, K. High temperature air cathodes containing ion conductive oxides. J. Electrochem. Soc. 138, 349–355 (1991). Joshi, A. S., Grew, K. N., Peracchio, A. A. & Chiu, W. K. S. Lattice Boltzmann modeling of 2D gas transport in a solid oxide fuel cell anode. J. Power Sources 164, 631–638 (2007). Suzue, Y., Shikazono, N. & Kasagi, N. Micro modeling of solid oxide fuel cell anode based on stochastic reconstruction. J. Power Sources 184, 52–59 (2008). Chen, H.-Y. et al. Simulation of coarsening in three-phase solid oxide fuel cell anodes. J. Power Sources 196, 1333–1337 (2011). Irvine, J. T. S. et al. Evolution of the electrochemical interface in high-temperature fuel cells and electrolysers. Nat. Energy 1, 15014 (2016). Mizusaki, J. et al. Kinetic studies of the reaction at the nickel pattern electrode on YSZ in H2–H2O atmospheres. Solid State Ion. 70/71, 52–58 (1994). Bieberle, A., Meier, L. P. & Gauckler, L. J. The electrochemistry of Ni pattern anodes used as solid oxide fuel cell model electrodes. J. Electrochem. Soc. 148, A646–A656 (2001). Utz, A., Störmer, H., Leonide, A., Weber, A. & Ivers-Tiffée, E. Degradation and relaxation effects of Ni patterned anodes in H2-H2O atmosphere. J. Electrochem. Soc. 157, B920–B930 (2010). Yao, W. & Croiset, E. Stability and electrochemical performance of Ni/YSZ pattern anodes in H2/H2O atmosphere. Can. J. Chem. Eng. 93, 2157–2167 (2015). Bieberle, A. & Gauckler, L. J. State-space modeling of the anodic SOFC system Ni, H2–H2O|YSZ. Solid State Ion. 146, 23–41 (2002). Bessler, W. G., Gewies, S. & Vogler, M. 
A new framework for physically based modeling of solid oxide fuel cells. Electrochim. Acta 53, 1782–1800 (2007). Vogler, M., Bieberle-Hütter, A., Gauckler, L., Warnatz, J. & Bessler, W. G. Modelling study of surface reactions, diffusion, and spillover at a Ni/YSZ patterned anode. J. Electrochem. Soc. 156, B663–B672 (2009). Goodwin, D. G., Zhu, H. Y., Colclasure, A. M. & Kee, R. J. Modeling electrochemical oxidation of hydrogen on Ni–YSZ pattern anodes. J. Electrochem. Soc. 156, B1004–B1021 (2009). Grew, K. N. & Chiu, W. K. S. A review of modeling and simulation techniques across the length scales for the solid oxide fuel cell. J. Power Sources 199, 1–13 (2012). Liu, S. et al. Predictive microkinetic model for solid oxide fuel cell patterned anode: Based on extensive literature survey and exhaustive simulations. J. Phys. Chem. C 121, 19069–19079 (2017). Shishkin, M. & Ziegler, T. Hydrogen oxidation at the Ni/yttria-stabilized zirconia interface: a study based on density functional theory. J. Phys. Chem. C 114, 11209–11214 (2010). Cucinotta, C. S., Bernasconi, M. & Parrinello, M. Hydrogen oxidation reaction at the Ni/YSZ anode of solid oxide fuel cells from first principles. Phys. Rev. Lett. 107, 206103 (2011). Ammal, S. C. & Heyden, A. Combined DFT and microkinetic modeling study of hydrogen oxidation at the Ni/YSZ anode of solid oxide fuel cells. J. Phys. Chem. Lett. 3, 2767–2772 (2012). Fu, Z., Wang, M., Zuo, P., Yang, Z. & Wu, R. Importance of oxygen spillover for fuel oxidation on Ni/YSZ anodes in solid oxide fuel cells. Phys. Chem. Chem. Phys. 16, 8536–8540 (2014). Liu, S., Ishimoto, T., Monder, D. S. & Koyama, M. First-principles study of oxygen transfer and hydrogen oxidation processes at the Ni-YSZ-gas triple phase boundaries in a solid oxide fuel cell anode. J. Phys. Chem. C 119, 27603–27608 (2015). Merinov, B. V., Mueller, J. E., van Duin, A. C. T., An, Q. & Goddard, W. A. III. ReaxFF reactive force-field modeling of the triple-phase boundary in a solid oxide fuel cell. J. Phys. Chem. Lett. 5, 4039–4043 (2014). Helveg, S. et al. Atomic-scale imaging of carbon nanofibre growth. Nature 427, 426–429 (2004). Cargnello, M. et al. Control of metal nanocrystal size reveals metal-support interface role for ceria catalysts. Science 341, 771–773 (2013). Daio, T. et al. Lattice strain mapping of platinum nanoparticles on carbon and SnO2 supports. Sci. Rep. 5, 13126 (2015). Akita, T., Maeda, Y. & Kohyama, M. Low-temperature CO oxidation properties and TEM/STEM observation of Au/γ-Fe2O3 catalysts. J. Catal. 324, 127–132 (2015). Bassim, N. D. et al. Minimizing damage during FIB sample preparation of soft materials. J. Microsc. 245, 288–301 (2012). Dickey, E. C., Fan, X. & Pennycook, S. J. Direct atomic-scale imaging of ceramic interfaces. Acta Mater. 47, 4061–4068 (1999). Dickey, E. C. et al. Preferred crystallographic orientation relationships of nickel films deposited on (100) cubic-zirconia substrates. Thin Solid Films 372, 37–44 (2000). Beltrán, J. I., Gallego, S., Cerdá, J., Moya, J. S. & Muñoz, M. C. Bond formation at the Ni/ZrO2 interface. Phys. Rev. B 68, 075401 (2003). Nahor, H. & Kaplan, W. D. Structure of the equilibrated Ni(111)-YSZ(111) solid–solid interface. J. Am. Ceram. Soc. 99, 1064–1070 (2015). Liu, S.-S., Jiao, Z., Shikazono, N., Matsumura, S. & Koyama, M. Observation of the Ni/YSZ interface in a conventional SOFC. J. Electrochem. Soc. 162, F750–F754 (2015). Liu, S.-S., Takayama, A., Matsumura, S. & Koyama, M. 
Image contrast enhancement of Ni/YSZ anode during the slice-and-view process in FIB-SEM. J. Microsc. 261, 326–332 (2016). Mitome, M. Ultrathin specimen preparation by a low-energy Ar-ion milling method. Microscopy 62, 321–326 (2013). Nelson, G. J. et al. Three-dimensional microstructural changes in the Ni–YSZ solid oxide fuel cell anode during operation. Acta Mater. 60, 3491–3500 (2012). Dravid, V. P., Lyman, C. E., Notis, M. R. & Revcolevschi, A. High resolution transmission electron microscopy of interphase interfaces in NiO–ZrO2 (CaO). Ultramicroscopy 29, 60–70 (1989). Dickey, E. C. et al. Structure and bonding at Ni–ZrO2 (cubic) interfaces formed by the reduction of a NiO–ZrO2 (cubic) composite. Microsc. Microanal. 3, 443–450 (1997). Vellinga, W. P. & Hosson, J. T. M. D. Atomic structure and orientation relations of interfaces between Ag and ZnO. Acta Mater. 45, 933–950 (1997). Zhang, Z. et al. The peculiarity of the metal-ceramic interface. Sci. Rep. 5, 11460 (2015). Kato, N. I. Reducing focused ion beam damage to transmission electron microscopy samples. J. Electron Microsc. 53, 451–458 (2004). Mueller, J. E., van Duin, A. C. T. & Goddard, W. A. Development and validation of ReaxFF reactive force field for hydrocarbon chemistry catalyzed by nickel. J. Phys. Chem. C 114, 4939–4949 (2010). van Duin, A. C. T., Merinov, B. V., Jang, S. S. & Goddard, W. A. III ReaxFF reactive force field for solid oxide fuel cell systems with application to oxygen ion transport in yttria-stabilized zirconia. J. Phys. Chem. A 112, 3133–3140 (2008). Brommer, P. & Gähler, F. Potfit: effective potentials from ab-initio data. Model. Simul. Mater. Sci. Eng. 15, 295–304 (2007). Plimpton, S. Fast parallel algorithms for short-range molecular dynamics. J. Comput. Phys. 117, 1–19 (1995). Koch, C. Determination of Core Structure Periodicity and Point Defect Density along Dislocations. PhD thesis, Arizona State Univ. (2002). Blaylock, D. W., Ogura, T., Green, W. H. & Beran, G. J. O. Computational investigation of thermochemistry and kinetics of steam methane reforming on Ni(111) under realistic conditions. J. Phys. Chem. C 113, 4898–4908 (2009). Weng, M. H. et al. Kinetics and mechanisms for the adsorption, dissociation, and diffusion of hydrogen in Ni and Ni/YSZ slabs: a DFT study. Langmuir 28, 5596–5605 (2012). Louthan, M. R. Jr., Donovan, J. A. & Caskey, G. R. Jr. Hydrogen diffusion and trapping in nickel. Acta Met. 23, 745–749 (1975). Wayman, M. L. & Weatherly, G. C. The H–Ni (hydrogen-nickel) system. J. Phase Equilib. 10, 569–580 (1989). Yu, J., Rosso, K. M. & Bruemmer, S. M. Charge and transport in NiO and aspects of Ni oxidation from first principles. J. Phys. Chem. C 116, 1948–1954 (2012). Hoang, T. M. C., Geerdink, B., Sturm, J. M., Lefferts, L. & Seshan, K. Steam reforming of acetic acid – a major component in the volatiles formed during gasification of humin. Appl. Catal. B Environ. 163, 74–82 (2015). Rossmeisl, J. & Bessler, W. G. Trends in catalytic activity for SOFC anode materials. Solid State Ion. 178, 1694–1700 (2008). Bessler, W. G. A new computational approach for SOFC impedance from detailed electrochemical reaction–diffusion models. Solid State Ion. 176, 997–1011 (2005). Iskandarov, A. M. & Tada, T. Dopant driven tuning of the hydrogen oxidation mechanism at the pore/nickel/zirconia triple phase boundary. Phys. Chem. Chem. Phys. 20, 12574–12588 (2018). Welander, M. M., Zachariasen, M. S., Hunt, C. D., Sofie, S. W. & Walker, R. A. 
Operando studies of redox resilience in ALT enhanced NiO-YSZ SOFC anodes. J. Electrochem. Soc. 165, F152–F157 (2018). Prakash, B. S., Kumar, S. S. & Aruna, S. T. Properties and development of Ni/YSZ as an anode material in solid oxide fuel cell: a review. Renew. Sust. Energy Rev. 36, 149–179 (2014). van Duin, A. C. T., Dasgupta, S., Lorant, F. & Goddard, W. A. III ReaxFF: a reactive force field for hydrocarbons. J. Phys. Chem. A 105, 9396–9409 (2001). Senftle, T. P. et al. The ReaxFF reactive force-field: development, applications and future directions. Npj Comput. Mater. 2, 15011 (2016). Kresse, G. & Furthmüller, J. Efficient iterative schemes for ab initio total-energy calculations using a plane-wave basis set. Phys. Rev. B 54, 11169 (1996). Kresse, G. & Furthmüller, J. Efficiency of ab-initio total energy calculations for metals and semiconductors using a plane-wave basis set. Comput. Mater. Sci. 6, 15–50 (1996). Hoover, W. G. Canonical dynamics: equilibrium phase-space distributions. Phys. Rev. A 31, 1695 (1985). Hoover, W. G. Constant-pressure equations of motion. Phys. Rev. A 34, 2499 (1986). Bitzek, E., Koskinen, P., Gahler, F., Moseler, M. & Gumbsch, P. Structural relaxation made simple. Phys. Rev. Lett. 97, 170201 (2006). Aktulga, H. M., Fogarty, J. C., Pandit, S. A. & Grama, A. Y. Parallel reactive molecular dynamics: numerical methods and algorithmic techniques. Parallel Comput. 38, 245–259 (2012). This work was supported by JST-CREST (JPMJCR11C2). Activities of the INAMORI Frontier Research Center were supported by the KYOCERA Corporation. L.C.S. thanks Professor A.C.T. van Duin (Pennsylvania State University) and Dr. B.V. Merinov (California Institute of Technology) for sharing their ReaxFF parameters and fruitful discussion. L.C.S. also thanks Dr. S. Liu (Kyushu University) for providing the theoretical data. These authors contributed equally: Shu-Sheng Liu, Leton C. Saha. INAMORI Frontier Research Center, Kyushu University, 744 Motooka, Nishi-ku, Fukuoka, 819-0395, Japan Shu-Sheng Liu, Leton C. Saha, Takayoshi Ishimoto, Syo Matsumura & Michihisa Koyama Institute of Fluid Science, Tohoku University, 2-1-1 Katahira, Aoba-ku, Sendai, 980-8577, Japan Leton C. Saha Institute of Industrial Science, The University of Tokyo, 4-6-1 Komaba, Meguro-ku, Tokyo, 153-8505, Japan Albert Iskandarov & Yoshitaka Umeno Department of Applied Quantum Physics and Nuclear Engineering, Kyushu University, 744 Motooka, Nishi-ku, Fukuoka, 819-0395, Japan Tomokazu Yamamoto & Syo Matsumura Global Research Center for Environment and Energy based on Nanomaterials Science, National Institute for Materials Science, 1-1 Namiki, Tsukuba, Ibaraki, 305-0044, Japan Michihisa Koyama Center for Energy and Environmental Science, Shinshu University, 4-17-1 Wakasato, Nagano, Nagano, 380-8553, Japan Shu-Sheng Liu Albert Iskandarov Takayoshi Ishimoto Tomokazu Yamamoto Yoshitaka Umeno Syo Matsumura S.-S.L. prepared the sample and carried out the microscope operation. L.C.S. performed the parameterization of ReaxFF, computational simulations, and modeling. A.I. (currently on leave of absence from the Institute for Metals Superplasticity Problems of RAS, Khalturin St. 39, Ufa, 450001, Russia) assisted the technical set-up of POTFIT, and T.I. aided the simulations. T.Y. assisted the TEM operation and QSTEM simulation. Y.U., S.M., and M.K. conceived and supervised the project. S.-S.L. and L.C.S. composed the manuscript and prepared the figures. All the authors discussed the results, commented, and revised the manuscript. 
Correspondence to Shu-Sheng Liu, Leton C. Saha or Michihisa Koyama.

Liu, S.-S., Saha, L.C., Iskandarov, A. et al. Atomic structure observations and reaction dynamics simulations on triple phase boundaries in solid-oxide fuel cells. Commun Chem 2, 48 (2019). https://doi.org/10.1038/s42004-019-0148-x
Biology Direct Altools: a user friendly NGS data analyser Salvatore Camiolo ORCID: orcid.org/0000-0002-8874-99931, Gaurav Sablok2 & Andrea Porceddu1 Biology Direct volume 11, Article number: 8 (2016) Cite this article Genotyping by re-sequencing has become a standard approach to estimate single nucleotide polymorphism (SNP) diversity, haplotype structure and the biodiversity and has been defined as an efficient approach to address geographical population genomics of several model species. To access core SNPs and insertion/deletion polymorphisms (indels), and to infer the phyletic patterns of speciation, most such approaches map short reads to the reference genome. Variant calling is important to establish patterns of genome-wide association studies (GWAS) for quantitative trait loci (QTLs), and to determine the population and haplotype structure based on SNPs, thus allowing content-dependent trait and evolutionary analysis. Several tools have been developed to investigate such polymorphisms as well as more complex genomic rearrangements such as copy number variations, presence/absence variations and large deletions. The programs available for this purpose have different strengths (e.g. accuracy, sensitivity and specificity) and weaknesses (e.g. low computation speed, complex installation procedure and absence of a user-friendly interface). Here we introduce Altools, a software package that is easy to install and use, which allows the precise detection of polymorphisms and structural variations. Altools uses the BWA/SAMtools/VarScan pipeline to call SNPs and indels, and the dnaCopy algorithm to achieve genome segmentation according to local coverage differences in order to identify copy number variations. It also uses insert size information from the alignment of paired-end reads and detects potential large deletions. A double mapping approach (BWA/BLASTn) identifies precise breakpoints while ensuring rapid elaboration. Finally, Altools implements several processes that yield deeper insight into the genes affected by the detected polymorphisms. Altools was used to analyse both simulated and real next-generation sequencing (NGS) data and performed satisfactorily in terms of positive predictive values, sensitivity, the identification of large deletion breakpoints and copy number detection. Altools is fast, reliable and easy to use for the mining of NGS data. The software package also attempts to link identified polymorphisms and structural variants to their biological functions thus providing more valuable information than similar tools. This article was reviewed by Prof. Lee and Prof. Raghava. Open peer review Reviewed by Prof. Lee and Prof. Raghava. For the full reviews, please go to the Reviewers' comments section. Genome-based polymorphic scans are the standard method to establish the degree of conservation and phylogenetic imprinting among the related plant taxa. Approaches based on re-sequencing have recently been exploited for the discovery of single nucleotide polymorphisms (SNPs) and insertion/deletion polymorphisms (indels) as a proxy for the phyletic patterns of evolution [1]. In addition to the creation of SNP maps, it is useful to identify SNPs associated with particular traits in order to localize quantitative trait loci (QTLs) suitable for molecular breeding programs [2]. In the last decade, the optimization of next-generation sequencing (NGS) chemistry and platforms has increased the throughput of sequencing while reducing costs. 
Although the generation of large amounts of sequence data is no longer a bottleneck in scientific investigations, the interpretation of the data remains challenging. Re-sequencing approaches produce millions of short reads 50–400 bp in length, although the latest technologies are likely to yield longer reads. When a target genome (TG) is re-sequenced, the alignment of such reads to a reference genome (RG) results in the detection of sequence variants such as SNPs and indels, and several alignment algorithms have been developed to detect them [3]. NGS platforms also generate sequencing errors, so other tools have been developed to reduce the number of false polymorphisms by introducing suitable statistical tests [4]. Although many aligners such as BWA [5] and Bowtie [6] incorporate algorithms that identify SNPs and indels quickly and accurately, they fail to detect large genomic deletions (hundreds to thousands of bases) possibly due to the segmental duplication of the genome and the retro-transposition of short and long interspersed elements (SINES and LINES) [7]. These types of polymorphisms are better highlighted by software that detects anomalous insert sizes in the alignment of paired-end reads, or by long-read sequencing approaches [8]. Alternatively, splitting each read into two portions can identify reads spanning the deleted segment (e.g. the deletion breakpoints) [9]. Tools such as Pindel [10], Breakdancer [11] and PEMer [12] rely on such strategies to identify large deletions, and must deal with the compromise between speed and the accuracy of breakpoint detection. Inferring the deletion coordinates from the distance between two mapped paired-end reads is inaccurate because the insert size is usually part of a distribution rather than a precise value. The identification of split-mapped reads is also an extremely time consuming and computationally demanding task. Resequencing data have also been used to detect large genomic rearrangements such as copy number variations (CNVs) and presence/absence variations (PAVs) [13]. CNVs reflect duplication or deletion events that change the copy number of specific genomic sequences when comparing target and reference genomes. Alignment coverage at each reference position will increase in a duplicated segment and decrease in a deleted segment, so the depth of coverage (DOC) is often used to identify CNVs [13]. PAVs are identified by detecting reference positions that are not covered by any target genome reads. Computational tools for sequence alignment and analysis are often difficult to install and use, particularly for non-specialist researchers with limited experience in the field of bioinformatics. Here we present Altools, a user-friendly software platform for the interpretation of resequencing data. The pipeline helps the user to achieve the alignment of sequenced reads against a reference genome, the discovery of SNPs/indels (at the genomic and transcript levels), CNVs, PAVs and large deletions through an intuitive graphical user interface (GUI). The algorithms included in Altools (Additional file 1: Figure S1) ensure the rapid and accurate analysis of sequence data and produce informative statistics that link the sequence data to biological functions [14]. Arabidopsis thaliana reference genome (Col0 ecotype) together with the corresponding gene annotation file was downloaded from the TAIR website (ftp://ftp.arabidopsis.org/home/tair/Genes/TAIR7_genome_release/). 
Gff2sequence [15] was used to generate FASTA formatted sequences of coding sequences (CDS) and untranslated regions (UTR). Resequencing data for the Tsu1 and Bur0 genotypes were downloaded from the SRA database (http://www.ncbi.nlm.nih.gov/sra/) (Additional file 2: Table S1). Genome simulation The R package RSVSim [16] was used with default parameters to generate A. thaliana simulated genomes that included deletions and duplications (maxDups = 10) of variable sizes (2000, 10,000 and 50,000 bp). For such rearranged genomes, dwgsim software (http://davetang.org/wiki/tiki-index.php?page=DWGSIM) was used to simulate Illumina paired-end 70-bp reads at different coverages (parameters: −C cov -c 0 -S 2 -e 0.0001-0.01 -E 0.0001-0.01, with cov equal to 4, 10, 20, 40 and 100). The same tool was used to generate simulated 70-bp paired end reads for the original A. thaliana genome with 40x coverage. Evaluation of polymorphism quality We applied the positive predictive value (PPV) and sensitivity tests to determine the robustness of SNPs and indels. The PPV is the portion of the total number of called polymorphisms that are correct [17]. Sensitivity indicates the ratio between the number of correctly called polymorphisms and the total number of genuine polymorphisms [17]. PPV and sensitivity were also used to evaluate the reliability of predicted large deletions and duplications. In this case, the number of positions included in the identified structural variants was divided by either the total number of bases in each structural variant (PPV) or by the total number of bases representing genuine structural variants (sensitivity). Read alignment: mapping raw reads against a reference genome The Read alignment tool allows the user to map a set of FASTQ-formatted reads to a reference genome using BWA [5] as the aligner, to sort and index the alignment file with SAMtools [18] and to call statistically significant polymorphisms with VarScan [19]. BWA was preferred over other aligners because it performs better than similar tools (e.g. Bowtie2) when analysing longer reads [20] (a scenario that will become more common for future sequencing technologies). Similarly, VarScan was chosen because of its high sensitivity [21] and better performance in lower-coverage sequencing runs [22]. Both tools have been implemented in Altools without modifications and therefore their performance has not changed. Altools will automatically recognize paired-end and single-end datasets and align them accordingly. Edit distance, number of threads (thus allowing for parallel computing) and any additional BWA flags can be specified by the user. When the alignment of reads is complete, a pileup-formatted file is generated by SAMtools [18] considering only those alignments that fulfil specific user-defined requirements ("minimum alignment quality", "minimum base quality" and "additional pileup parameters" in the GUI). More information can be found in the Altools manual provided with the software. Pileup analyser: providing faster access to the alignment data The Pileup analyser tool is used to generate a pileup folder containing files related to each chromosome in the reference genome. Only information about position, reference genome nucleotide, target genome nucleotide, coverage and presence/absence of SNPs and indels is reported in such files, with the aim of reducing disk space usage and data processing times during further analysis. 
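Altools stores these per-chromosome records in its own format, which is not documented here. Purely as an illustration of the kind of reduction involved, the sketch below condenses standard samtools pileup lines (chromosome, position, reference base, depth, read bases, base qualities) into minimal per-position records; the file name and field handling are assumptions, not Altools code.

```python
def parse_mpileup_line(line):
    """Reduce one samtools (m)pileup line to a compact per-position record.

    A pileup line contains at least: chromosome, 1-based position, reference
    base, read depth, read bases and base qualities. Only the fields that
    Pileup analyser is described as retaining (position, reference base,
    coverage) are kept here; variant calls would come from the VarScan output.
    """
    fields = line.rstrip("\n").split("\t")
    chrom = fields[0]
    record = {"pos": int(fields[1]), "ref": fields[2], "cov": int(fields[3])}
    return chrom, record

records_by_chrom = {}
with open("alignment.pileup") as handle:          # hypothetical file name
    for line in handle:
        chrom, record = parse_mpileup_line(line)
        records_by_chrom.setdefault(chrom, []).append(record)
```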
Pileup analyser also offers several configurable filter settings relative to the minimum number of reads, the base quality, the minimum p-value and threshold allele frequency for calling SNPs and indels. A comprehensive summary statistics file is also produced, reporting the percentage of non-covered chromosomes, the frequency of SNPs and indels, specific coverage of bases G|C and A|T, and the frequency of bases involved in selected polymorphisms. Coverage analyser: detecting CNVs and PAVs The Coverage analyser tool is designed to investigate CNVs and PAVs based on the local depth of coverage. Anomalous coverage values may reflect the structure of the target genome (i.e. duplications may be present in the reference genome), so CNV detection requires that alignment data from both the target and reference genomes are compared. Coverage analyser initially calculates the average coverage for the reference genome (RGavCov) and target genome (TGavCov) while computing only informative positions (i.e. coverage >0). A series of adjacent windows is then generated along the chromosomes, and for the i th window an average coverage is calculated for both the reference genome (RGwindCov(i)) and the target genome (TGwindCov(i)) by computing the information reported in the relative pileup folders. Genomic portions that feature TGwindCov(i) = 0 but RGwindCov(i) >0 are immediately reported in the output as "zero coverage" regions, which highlight potential PAVs. Furthermore, for each i th window, the value ρ(i) is calculated as the ratio between the average coverage of the target and reference genomes in that window: $$ \rho (i) = \frac{T{G}_{WindCov(i)}}{R{G}_{WindCov(i)}} $$ The DNAcopy algorithm [23] is then used to split the DNA into segments featuring homogeneous values of ρ(i) (hereafter ρseg). For each segment j, this value is normalized in order to account for the average coverage of the two segments: $$ {\rho}_{seg Norm(j)} = {\rho}_{seg(j)}\ \frac{R{G}_{avCov}}{T{G}_{avCov}} $$ Moreover, for each segment, the average coverage of the target genome (TGsegCov(j)) and reference genome (RGsegCov(j)) are also calculated. Coverage analyser then reports losses and gains according to the following rationale: for the j th segment, the hypothetical copy number for both the reference and target genomes is calculated by dividing the segment average coverage by the overall average coverage: $$ T{G}_{segCopy(j)} = \frac{T{G}_{segCov(j)}}{T{G}_{avCov}}\kern3.25em R{G}_{segCopy(j)} = \frac{R{G}_{segCov(j)}}{R{G}_{avCov}} $$ If one or more copies of segment j have been lost from the target genome then the following relationship should be satisfied: $$ T{G}_{segCopy(j)}\ \le R{G}_{segCopy(j)}-1 $$ However, if one considers a diploid organism that loses a segment copy in only one of the homologous chromosomes, the following relationship is more accurate: $$ T{G}_{segCopy(j)}\ \le R{G}_{segCopy(j)}-0.5 $$ The above can be reformulated as: $$ {\rho}_{segNorm(j)}\ R{G}_{segCopy(j)}\ \le\ R{G}_{segCopy(j)}-0.5 $$ This leads to the conclusion that a segment can be defined as lost if the following relationship is satisfied: $$ {\rho}_{segNorm(j)\_ loss}\ \le\ 1-\frac{0.5}{R{G}_{segCopy(j)}} $$ Similarly, a gained segment is reported if the following relationship is satisfied: $$ {\rho}_{segNorm(j)\_ gain}\ \ge\ 1+\frac{0.5}{R{G}_{segCopy(j)}} $$ DNAcopy allows the merging of segments whose ρseg values are at least three standard deviations apart, therefore creating a smoothed dataset. 
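As an illustration of the loss/gain test derived above, the following minimal sketch classifies a single segment from its raw coverage values. The function and variable names are assumptions and the snippet is not Coverage analyser's actual implementation; it simply applies the inequalities given in the text.

```python
def classify_segment(tg_seg_cov, rg_seg_cov, tg_av_cov, rg_av_cov):
    """Classify a DNAcopy segment as 'loss', 'gain' or 'neutral'.

    rho_seg_norm is the target/reference coverage ratio of the segment
    normalized by the genome-wide averages, and the +/- 0.5 copy tolerance
    accounts for heterozygous events in a diploid genome, as in the text.
    """
    rho_seg = tg_seg_cov / rg_seg_cov
    rho_seg_norm = rho_seg * (rg_av_cov / tg_av_cov)
    rg_seg_copy = rg_seg_cov / rg_av_cov          # inferred reference copy number
    if rho_seg_norm <= 1 - 0.5 / rg_seg_copy:
        return "loss"
    if rho_seg_norm >= 1 + 0.5 / rg_seg_copy:
        return "gain"
    return "neutral"

# Hypothetical segment: the reference carries ~2 copies, target coverage is halved.
print(classify_segment(tg_seg_cov=20, rg_seg_cov=80, tg_av_cov=40, rg_av_cov=40))  # 'loss'
```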
Coverage analyser also performs the search for lost and gained segments on such datasets. Importantly, Coverage analyser not only returns the coverage ratio but also the individual calculated copy number for both the reference and target genomes. This feature provides a deeper insight into the meaning of the ratio value (e.g. a value of 2 may derive from a 2:1 or 4:2 ratio, among others). Sliding analysis: visualizing coverage and polymorphism data The Sliding analysis tool computes the average coverage together with the frequency of SNPs and indels within either adjacent or sliding windows along the chromosome. Both the raw data and the corresponding plots are generated, so this tool quickly highlights highly polymorphic regions or sites potentially containing CNVs. Large deletions finder: fast identification of deletions breakpoints Common aligners that use short reads are not suitable for the detection of long deletions. The Large deletions finder tool uses a folder containing SAM-formatted files that are produced following the alignment of paired-end reads to a reference genome. A deletion is called when the mapping distance between two mate-reads is higher than a user-defined threshold. Overlapping deletions can be merged if the distance between the first mate for both sets of paired ends does not exceed a user-defined number of nucleotides. Altools returns the approximate coordinates of the deletion boundaries at this stage (Additional file 3: Figure S2A). An additional alignment step is performed using BLASTn to precisely identify the deletion breakpoints. Two ranges are defined that are 2000 nucleotides wide and centred on the approximate start and end positions, respectively (Additional file 3: Figure S2B). All read pairs for which at least one mate is mapped within such ranges are extracted from the SAM-formatted alignment file and mapped onto the reference genome by BLASTn alignment. Reads that did not map onto the reference genome originally, possibly due to a broken alignment, will produce hits that can be used to infer the real deletion boundaries (Additional file 3: Figure S2C). Coverage analyser carries out an additional test to highlight potential false positive deletions reflecting intrachromosomal duplication events. The first 200 nucleotides beyond the upstream deletion breakpoint are extracted from the reference genome and used again as a BLASTn query to search for additional alignments. In the output file, further fields are reported for each deletion indicating the position of these secondary alignments, their percentage of identity and alignment coverage. We define deletions that feature such supplementary fields such as ambiguous, as explained in more detail in the Altools manual (Additional file 4: Figure S3). Finally, the coverage of the deleted regions is reported in order to speculate whether the detected structural variation is homozygous or heterozygous, and to test for the presence of the deleted regions at other positions within the target genome. Polymorphism analyser: linking variants to biological functions When SNPs and indels have been identified using the BWA/SAMtools/VarScan pipeline, the Polymorphism analyser tool can be used to highlight those nucleotide variations that affect the genic portions, i.e. coding sequences (CDS) and untranslated regions (UTR). This tool requires the pileup folder, an additional folder containing FASTA-formatted CDS and UTR sequences, and the gff3-formatted gene annotation file. 
Polymorphism analyser returns a table that reports information such as: (a) the genic portion of the sequence (CDS, 3'UTR and/or 5'UTR), (b) the gene name, (c) the relative position of the polymorphism, (d) the nucleotides called in the reference genome and in the aligned reads, (e) the zygosity of the mutation, (f) amino acid substitutions due to non-synonymous SNPs, including mutations generating a premature stop codon, and (g) any frameshift caused by indels within the CDS.

Alignment comparison

The 1:1 Alignment tool compares the pileup folders of two different alignments on the same reference genome and reports the common and unique polymorphisms.

Gene extractor

The Large deletion finder and Coverage analyser tools feature an option to generate a GE file that can be analysed in more detail using the Gene Extractor tool. The latter also requires a gff3-formatted annotation file and returns a list of genes that are partially (marked with the flag 0) or totally (marked with the flag 1) included within a selected structural variation.

SNP/indel identification in simulated genomes

The A. thaliana genome (TAIR7) was used as a scaffold to generate five sets of paired-end Illumina reads with 4x, 10x, 20x, 40x and 100x coverage, respectively. For each coverage dataset, reads were aligned to the original reference genome using the Read alignment tool with default parameters. The Pileup analyser tool was then used (see Additional file 5: Table S2 for settings) to detect the simulated polymorphisms. Although the PPVs were >0.99 for each of the analysed datasets, sensitivity increased to a plateau at 20x coverage for both SNPs and indels (Table 1). Moreover, whereas the SNP calling sensitivity reached a maximum value of 0.98, indel identification was poorer, with a maximum value of 0.81 at 40x coverage.

Table 1 Performance of the Altools platform (detection of polymorphisms). Statistical analysis of Altools polymorphism calling was carried out at five simulated coverage levels

Structural variation identification in simulated genomes

Fifty deletions of 2000 bp were introduced into the A. thaliana genome and the resulting simulated sequence was used to generate five sets of paired-end Illumina reads with 4x, 10x, 20x, 40x and 100x coverage, respectively. The same test was then repeated by simulating 10,000 and 50,000 bp deletions. The Large deletions finder tool was used to localize the simulated deletions in each dataset. The PPV and sensitivity were >0.97 for all the datasets and in many cases they reached their maximum value (Fig. 1 and Additional file 6: Figure S4). Furthermore, we computed the distribution of the differences between the observed and simulated breakpoints. The median was 0 for all combinations of coverage and deletion size, with differences of only a few nucleotides between the 10th and 90th percentiles of the distribution (Fig. 1 and Additional file 6: Figure S4). The Large deletions finder tool was compared to the widely-used Pindel software [10] and the former showed superior performance in terms of execution time and, in most cases, also PPV and sensitivity (Additional file 7: Table S3).
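The base-level PPV and sensitivity used for these structural-variant benchmarks follow the definition given in the Evaluation of polymorphism quality subsection (overlapping bases divided by called bases, and by true bases, respectively). The sketch below is an illustrative implementation of that definition, not code taken from Altools.

```python
def interval_ppv_sensitivity(called, truth):
    """Base-level PPV and sensitivity for structural variant calls.

    `called` and `truth` are lists of (start, end) intervals (1-based,
    inclusive) on one chromosome. PPV = overlapping bases / called bases;
    sensitivity = overlapping bases / true bases.
    """
    def to_positions(intervals):
        positions = set()
        for start, end in intervals:
            positions.update(range(start, end + 1))
        return positions

    called_pos, truth_pos = to_positions(called), to_positions(truth)
    overlap = len(called_pos & truth_pos)
    ppv = overlap / len(called_pos) if called_pos else 0.0
    sensitivity = overlap / len(truth_pos) if truth_pos else 0.0
    return ppv, sensitivity

# Hypothetical 2000-bp simulated deletion recovered with slightly shifted breakpoints.
print(interval_ppv_sensitivity(called=[(10010, 12030)], truth=[(10000, 11999)]))
```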
Fig. 1 Performance of the Large deletion finder tool (detection of large deletion breakpoints). Distribution of the differences between detected and expected breakpoint positions called by the Large deletion finder tool, together with the corresponding PPV and sensitivity. The plots represent the results on simulated read datasets with 10x coverage and three large deletion sizes (2000, 10,000 and 50,000 bp).

We also simulated 50 duplications of 2000 bp in the same reference genome and generated five sets of paired-end Illumina reads with 4x, 10x, 20x, 40x and 100x coverage, respectively. The approach described above was used to investigate duplications of 10,000 and 50,000 bp. In each of the simulated datasets, the maximum number of duplications was 10. Coverage analyser was used to localize the duplicated regions and determine the number of copies based on a reference genome pileup folder derived from the alignment and pileup of A. thaliana simulated reads. A 50-bp window was used and only losses/gains larger than 500 bp were sent to the output file. The software achieved the best performance when only large duplications were present, resulting in the highest PPVs (0.97–1) and sensitivities (0.99–1), as shown in Fig. 2 and Additional file 8: Figure S5. However, the sensitivity declined to ~0.95 for the duplications of 2000 and 10,000 bp, although the PPV was poor only for the 4x simulated dataset (PPV = 0.21 for 2000 bp and 0.65 for 10,000 bp), as shown in Additional file 8: Figure S5. The copy number was also predicted precisely, with the slope between the detected and expected copy numbers always higher than 0.9 (Fig. 2 and Additional file 8: Figure S5). The comparison of this module with other software for the detection of CNVs, e.g. CNVseq [24], confirmed its excellent performance in terms of execution times, PPV and sensitivity (Additional file 7: Table S3).

Fig. 2 Performance of the Coverage analyser tool (detection of copy number variation). Scatterplot showing differences between detected and expected copy numbers called by the Coverage analyser tool, together with the corresponding values of PPV and sensitivity. The plots represent the results on simulated read datasets with 10x coverage and three duplication sizes (2000, 10,000 and 50,000 bp).

Analysis of A. thaliana resequencing data using Altools

Altools was used to analyse the real resequencing data of two A. thaliana accessions (Bur0 and Tsu1) for the robust detection of polymorphisms and to estimate the scalability of the approach. The Pileup analyser tool identified several key features, such as: (a) a higher coverage of G|C compared to A|T bases (Additional file 9: Table S4), which is a known bias for some Illumina sequencing platforms [25]; (b) a higher frequency of polymorphisms on chromosome 4 (Additional file 10: Figure S6); and (c) maintenance of the genomic structure despite the SNP and indel events (Additional file 11: Figure S7). The Polymorphism analyser tool highlighted the presence of 133,129 SNPs and 5343 indels within the CDS and UTRs of Bur0 transcripts. Interestingly, 94 % of the SNPs we identified were homozygous, compared to only 61.2 % of the indels (Table 2). The higher degree of SNP homozygosity reflects the status of A. thaliana as an autogamous plant species, whereas the different zygosity ratio in the context of indels suggests they are less likely to become fixed due to their potential deleterious effects, e.g. frameshifts in the CDS or regulatory disruption in the UTRs. SNPs in the CDS resulted in 49,369 amino acid substitutions, 573 premature stop codons and the loss of the stop codon in at least one allele of 114 genes (Table 2).
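For readers who want to reproduce this kind of annotation, the sketch below classifies the effect of a coding SNP with the standard genetic code via Biopython. It is a simplified illustration rather than the Polymorphism analyser implementation, and the example CDS is hypothetical.

```python
from Bio.Seq import Seq  # Biopython

def classify_coding_snp(cds, pos0, alt_base):
    """Classify the effect of a SNP at 0-based position `pos0` of a CDS.

    Returns 'synonymous', 'non-synonymous', 'premature-stop' or 'stop-loss',
    using the standard genetic code.
    """
    codon_start = (pos0 // 3) * 3
    ref_codon = cds[codon_start:codon_start + 3]
    alt_codon = ref_codon[:pos0 % 3] + alt_base + ref_codon[pos0 % 3 + 1:]
    ref_aa = str(Seq(ref_codon).translate())
    alt_aa = str(Seq(alt_codon).translate())
    if ref_aa == alt_aa:
        return "synonymous"
    if alt_aa == "*":
        return "premature-stop"
    if ref_aa == "*":
        return "stop-loss"
    return "non-synonymous"

print(classify_coding_snp("ATGGCATGA", 4, "T"))  # GCA (Ala) -> GTA (Val): non-synonymous
```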
A similar picture emerged when the Tsu1 resequencing data were analysed, although the SNP frequency proved to be more homogeneous when comparing the CDS and UTRs in this accession (~0.29 %).

Table 2 Polymorphisms found in the genomes and transcripts of A. thaliana accessions Bur0 and Tsu1

The 1:1 Alignment tool was used to compare Bur0 and Tsu1 polymorphisms, revealing that nearly 30 % of the polymorphisms were common to both accessions (Additional file 12: Figure S8). The Coverage analyser tool was used to investigate loss and gain events in Bur0 by comparing its resequencing data to the A. thaliana simulated data (accession Col0) as previously described (window size = 50, minimum number of windows to merge = 4, minimum structural variant size = 1000 bp). Nearly 4.4 million bp were shown to be lost from the Bur0 genome, whereas 3.4 million bp were gained (Table 3). Gene Extractor was used to investigate whether such structural variations could include annotated genes. Although the identified structural variants comprised more than 6 % of the A. thaliana genome, only a few hundred genes were totally included in the corresponding regions (Table 3). A gene ontology (GO) singular enrichment analysis (SEA) using the web-based server Agrigo (http://bioinfo.cau.edu.cn/agriGO/analysis.php) revealed that the gained genes were mostly involved in the respiration pathway (Additional file 13: Table S5) whereas the missing genes (lost and zero coverage) were enriched in stress-response functions (Additional file 14: Table S6).

Table 3 Coverage analyser results for A. thaliana accession Bur0. Total number of bases detected as gains, losses and zero coverage areas together with the number of annotated genes found in these areas

In this paper we present Altools, a new software pipeline for the analysis and interpretation of NGS data. Altools features a GUI-enabled workflow for variant calling that guides the user through all steps, beginning with reference-assisted alignment and ending with the functional annotation of identified variants. Altools relies on a Java-built GUI that provides a user-friendly bioinformatics environment, together with several algorithms developed in C++ that maximize computational performance. Although many software platforms have been developed to handle NGS data analysis, Altools offers a unique set of advantageous features. The BWA/SAMtools/VarScan pipeline is used for the alignment and identification of SNPs and indels, and to the best of our knowledge this is the first time these components have been embedded in a single software platform and the overall performance verified. We found that the proposed strategy achieved satisfactory results in terms of PPV and sensitivity, although the best performance was achieved at coverages of 10x or more (Table 1). The performance and scalability of the workflow were equivalent to, or in some cases even better than, other available tools [17]. Detection sensitivity was better for SNPs than for indels (Table 1). This may reflect the low edit distance used in the alignment step (BWA flag -n = 4), which can reduce the probability of alignment for reads featuring longer insertions or deletions. A new algorithm was developed for the identification of large deletions. It takes into account paired-end reads that map to the same chromosome but at a distance incompatible with the expected insert size, which determines the approximate coordinates of large deletions.
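A minimal sketch of this first, insert-size-based step is shown below; the thresholds, the data layout and the function names are illustrative assumptions rather than Altools defaults, and the subsequent BLASTn refinement of breakpoints (described next) is not included.

```python
def candidate_deletions(pairs, max_insert=600, merge_window=100):
    """Approximate large deletions from paired-end mapping distances.

    `pairs` is an iterable of (chrom, left_mate_end, right_mate_start) for
    read pairs mapped to the same chromosome in the expected orientation.
    A candidate deletion is called when the gap between the mates exceeds
    the expected insert size; candidates whose left coordinates lie within
    `merge_window` bp of each other are merged.
    """
    candidates = {}
    for chrom, left_end, right_start in pairs:
        gap = right_start - left_end
        if gap > max_insert:
            candidates.setdefault(chrom, []).append((left_end, right_start))

    merged = {}
    for chrom, intervals in candidates.items():
        intervals.sort()
        merged[chrom] = [list(intervals[0])]
        for start, end in intervals[1:]:
            if start - merged[chrom][-1][0] <= merge_window:
                merged[chrom][-1][1] = max(merged[chrom][-1][1], end)
            else:
                merged[chrom].append([start, end])
    return merged

pairs = [("Chr1", 10050, 15020), ("Chr1", 10080, 15060), ("Chr2", 500, 780)]
print(candidate_deletions(pairs))  # {'Chr1': [[10050, 15060]]}
```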
The BLAST algorithm is then used to accurately detect the deletion breakpoints by using the broken alignment of reads spanning the identified deletions. Two additional features make the Large deletion finder tool superior to similar tools. First, coverage of the deleted segment is also calculated in the reference genome. This can provide a deeper insight on the typology of the lost DNA portion, i.e. the presence of aligned reads within deletions may reflect either a heterozygous structural variation or the presence of a paralogous region elsewhere in the genome. Second, the Large deletion finder tool also tests whether the deletion flanking regions are duplicated in additional positions of the chromosome. This feature, together with the number of reads supporting the structural variation, allowed us to exclude potential false positive deletions and achieve good performance in terms of PPV, sensitivity and precision of breakpoint detection for all the simulated datasets we analysed (Figs. 1 and Additional file 6: Figure S4). The Coverage analyser tool achieved satisfactory PPV and sensitivity values together with a precise calculation of the copy number in most of the simulated datasets (Figs. 2 and Additional file 8: Figure S5). The performance was poorer when we analysed datasets featuring lower coverage and smaller duplicated segments because the method is sensitive to random coverage fluctuations that are more easily averaged in longer segments. One of the main advantages of Altools is its ability to link SNPs, indels, CNVs, PAVs and large structural variations with biological outcomes. The benefit of this approach emerged from the analysis of two A. thaliana accessions, Bur0 and Tsu1. First, Pileup analyser produced statistics that were used for the assessment of the sequencing quality (e.g. G|C vs A|T coverage) while revealing that small polymorphisms (SNPs and indels) preserve the general AT-rich nucleotide composition profile (Additional file 11: Figure S7). Because this tool considers single chromosome datasets, chromosome 4 was identified as the most polymorphic in both accessions (Additional file 10: Figure S6). The Coverage analyser tool allowed the identification of CNVs and PAVs in the Bur0 accession and revealed that almost 6 % of the reference genome is involved in such structural variations. Nevertheless, the Gene extractor tool showed that only a few hundred annotated genes were included completely within the detected CNVs and PAVs as expected, and that most structural variations were intergenic (or non-annotated) sequences. Interestingly, GO enrichment revealed ontologies associated with the respiration pathway (Additional file 13: Table S5) which corresponds to the ability of Bur0 shoots to produce larger amounts of several sugars compared to the Col0 accession under specific conditions [26]. The analysis of CNVs and PAVs also showed that many of the genes that have been lost from the Bur0 accession are related to stress-response functions (Additional file 14: Table S6) matching the more stress-sensitive characteristics of Bur0 compared to Col0 [27]. The Polymorphism analyser tool allowed the identification of genes in which SNPs or indels caused gene loss, premature truncation or amino acid substitutions. A simple evaluation of polymorphism frequencies within transcripts showed how SNPs are more likely than indels to become fixed in the CDS, with indels featuring much less frequently in the CDS compared to the UTRs. 
This hypothesis was confirmed by the higher percentage of heterozygous indels, contrasting with the autogamy of A. thaliana (Table 2). Finally, polymorphisms in the Bur0 and Tsu1 accessions were compared to find common and unique SNPs and indels, an additional Altools feature that could be used to investigate phylogenetic relationships, develop a DNA barcoding system or conduct genome-wide association studies.

Advances in NGS technologies in recent years have led to the development of streamlined workflows for the analysis and interpretation of NGS data. In this context, Altools offers a unique combination of features including an intuitive GUI, a straightforward installation procedure and user-friendly menus suitable for researchers with only basic informatics skills. The new algorithm for the identification of several types of structural variations was fast, accurate and sensitive, equalling or exceeding the performance of contemporary software platforms. Finally, the Altools pipeline is not solely based on the comparative analysis of sequencing data but also on the biological interpretation of complex datasets.

Availability and requirements
Project name: Altools
Project home page: http://sourceforge.net/projects/altools/
Operating system: Linux 64bit
Programming language: Java, C++, R
Other requirements: xterm, R package DNAcopy, Java version 1.8.0_45 or later
License: GNU GPL
Any restriction to use by non-academics: no restriction applied

Reviewer's comments
Reviewer's report 2: Prof. Sanghyuk Lee
Reviewer recommendations to authors: The following points need to be addressed to improve the quality of the work.
1. Most of the pipelines lack an objective comparison with other tools publicly available. For example, they implemented BWA/samtools/Varscan for identifying SNPs and indels and it showed satisfactory performance in terms of PPV and sensitivity in their simulation study. However, its performance should be compared with other programs such as GATK utilities, PINDEL, Scalpel. CNVs are identified with their own in-house developed algorithm. Again, its performance should be compared with other tools for similar purposes (e.g. XHMM, ExomeDepth, Conifer, CONTRA, and exomeCopy). Without such comparison, it is difficult to judge whether Altools' results are superior to those tools and nobody would use the tool.
2. The pipeline is tightly designed with very limited flexibility. A better approach would be to allow users to choose proper tools and processes like the GALAXY workflow engine. New and better tools are constantly released and users should be able to choose such updated tools if necessary. I believe that there exist better tools than Varscan in variant calling. Furthermore, the hard-wired pipeline of Altools is difficult to modify. For example, it is usually recommended to incorporate adaptor trimming, duplicate removal, and alignment recalibration for pre-processing of the NGS data in analyzing well-established model organisms.
3. The packaging of tools needs significant improvement. I do not feel that the tool is really user-friendly, with poor flexibility, no utility tools for log or process management, and no unique visualization support.
Minor issues: English editing is strongly recommended.
Authors' response to reviewer 2: We would like to thank Professor Lee for his valuable suggestions. Please find hereafter a point-by-point response to the raised concerns.
Major revisions.
We ran a benchmark test on Altools by comparing its performance with that of CNVseq for the detection of CNVs and Pindel for the detection of large deletions. The results (Additional file 7: Table S3) show that our software performed better in terms of execution time and, in general, in terms of PPV and sensitivity. The choice of the BWA aligner and VarScan polymorphism caller is now better explained in the text. We also appreciated the suggestion to improve the GUI by including a utility for log or process management, a visualization tool and a wider collection of aligners, polymorphism callers and read pre-processing tools and we intend to consider these suggestions for future Altools updates. For the time being, we believe that relying on widely-used file formats such as SAM, BAM and SAMtools pileup will already deliver a certain degree of flexibility to the Altools environment. For example, users can apply their favourite tools to generate compatible files and can still submit their data to the Altools structural variation detection algorithm. Minor issues. A professional scientific editing service has carried out a thorough revision of the manuscript. Reviewer 2's comments to the revised manuscript: As suggested in the previous review, authors compared the performance of Altools with CNVseq for CNVs and Pindel for large indels, and report better PPV and sensitivity. However, I think that the comparison target programs were not properly chosen. Both CNVseq and Pindel were published in 2009 and I believe that many other programs have been published for the same purpose. Furthermore, the issue of limited flexibility was not resolved yet. Even though Altools can be combined with various file formats in principle, experts with such capability would not use a pipeline tool not supporting recent advanced algorithms. Authors' response: We would like to thank Professor Lee for his comments. Although we are aware of the most recent algorithms for the identification of polymorphisms and structural variations, we decided to benchmark Altools against Pindel and CNVseq because these software platforms are widely used, their quality is well established, and comparative tests against similar tools have been published in the recent literature (e.g. J. Zhang et al., 2014, Horticulture Research 1:14045; D. H. Ghoneim, 2014, BMC Research Notes 7:864, J. Duan, 2013, PlosOne 8:e59128). Indeed Professor Lee suggested Pindel as one of the platforms we should use for comparison. Finally, as indicated in our previous response, we are already working to improve the flexibility of Altools and compatibility with more recent algorithms will be introduced in a forthcoming update. Reviewer's report 3: Prof. Gajendra Raghava In this manuscript, a pipeline developed for analyzing NGS data has been described. This is important pipeline for researchers working in the filed of genomics. In the present form this manuscript is not publishable as authors have not justified their claims. In addition selection of tools integrated in this manuscript need to be justified. Major comments 1. In past number of pipelines have been developed on NGS, author should show comparison of Altools with existing tools. 2. Authors claim that their pipeline is fast (fast in terms of what?)). In order to justify their claim they should benchmark their method in term of execution time used to process NGS data. 3. In addition, authors should show superiority of individual tools integrated in their pipeline over existing tools. 
This is important to show the application of this pipeline.
4) The Altools pipeline contains eight major modules or components; the authors should list indigenous and third-party software separately. A graphical flowchart of Altools would be useful for readers to understand the components of the pipeline.
Minor comments:
1) This manuscript needs to be revised thoroughly as it contains several grammatical and typographical mistakes (e.g. "genome wise association (GWAS) studies" should be "genome-wide association studies (GWAS)"). This pipeline has been mentioned as Altools and ALtools in the manuscript; it should be uniform.
2) Additional file 11: Figure S7 is mentioned at page 14 (Line 41), which is otherwise missing.
3) In Table 2, what is the meaning of values having a comma in between, e.g. 0,003?
4) In Table 1, they show total called, true called and false called SNPs. What about missed SNPs, which were generated by the dwgsim software but not called at all by Altools?
5) Bowtie was not used while it can take care of splice variants? The preference for BWA over Bowtie should be mentioned somewhere.
6) There is a need to generate a comprehensive manual for Altools.
Author's response to reviewer 3: We would like to thank Prof. Raghava for his exhaustive review. Please find hereafter a point-by-point response to the raised concerns.
Altools was benchmarked against two published software platforms for the determination of copy number variations (CNVs) and large deletions. The results (Additional file 7: Table S3) show that our software performed better in terms of execution time and, in general, in terms of PPV and sensitivity. The execution speed is now reported and compared to similar software platforms (Additional file 7: Table S3). The choice of the different software modules is now better explained in the text. A flowchart illustrating the original and third-party software within Altools has been added to the revised version of the manuscript.
Minor issues. A professional scientific editing service has carried out a thorough revision of the manuscript. This included the careful standardization and correction of all software names, the checking of abbreviations and initialisms for accuracy, grammatical corrections and style revision. The missing figure has now been added. "," has been replaced by "." as the decimal separator in all the tables. The sensitivity values were calculated as "the fraction of simulated variants which were called from the sequence data" (ref 17), and this is intended to address the concern raised by the reviewer. The preference for BWA over Bowtie2 as the aligner is now addressed in the revised manuscript. A comprehensive manual for Altools is included in the software folder.
CNV: copy number variation; GUI: graphical user interface; GWAS: genome-wide association study; PAV: presence/absence variation; SNP: single nucleotide polymorphism.
Helyar SJ, Hemmer-Hansen J, Bekkevold D, Taylor MI, Ogden R, Limborg MT, et al. Application of SNPs for population genetics of nonmodel organisms: new opportunities and challenges. Mol Ecol Resour. 2011;11 Suppl 1:123–36. Eathington SR, Crosbie TM, Edwards MD, Reiter RS, Bull JK. Molecular Markers in a Commercial Breeding Program. Crop Sci. 2007;47:S–154. Li H, Homer N. A survey of sequence alignment algorithms for next-generation sequencing. Brief Bioinform. 2010;11:473–83. Pirooznia M, Kramer M, Parla J, Goes FS, Potash JB, McCombie WR, et al. Validation and assessment of variant calling pipelines for next-generation sequencing. Hum Genomics. 2014;8:14. Li H, Durbin R. Fast and accurate short read alignment with Burrows-Wheeler transform. Bioinformatics.
2009;25:1754–60. Langmead B, Trapnell C, Pop M, Salzberg SL. Ultrafast and memory-efficient alignment of short DNA sequences to the human genome. Genome Biol. 2009;10:R25. Kazazian HH. Mobile elements: drivers of genome evolution. Science. 2004;303:1626–32. Tuzun E, Sharp AJ, Bailey JA, Kaul R, Morrison VA, Pertz LM, et al. Fine-scale structural variation of the human genome. Nat Genet. 2005;37:727–32. Mills RE, Luttig CT, Larkins CE, Beauchamp A, Tsui C, Pittard WS, et al. An initial map of insertion and deletion (INDEL) variation in the human genome. Genome Res. 2006;16:1182–90. Ye K, Schulz MH, Long Q, Apweiler R, Ning Z. Pindel: a pattern growth approach to detect break points of large deletions and medium sized insertions from paired-end short reads. Bioinformatics. 2009;25:2865–71. Fan X, Abbott TE, Larson D, Chen K. BreakDancer - Identification of Genomic Structural Variation from Paired-End Read Mapping. Curr Protoc Bioinformatics. 2014;2014. Korbel JO, Abyzov A, Mu XJ, Carriero N, Cayting P, Zhang Z, et al. PEMer: a computational framework with simulation-based error models for inferring genomic structural variants from massive paired-end sequencing data. Genome Biol. 2009;10:R23. Medvedev P, Stanciu M, Brudno M. Computational methods for discovering structural variation with next-generation sequencing. Nat Methods. 2009;6(11 Suppl):S13–20. Aßmus J, Schmitt AO, Bortfeldt RH, Brockmann GA. NovelSNPer: A Fast Tool for the Identification and Characterization of Novel SNPs and InDels. Adv Bioinformatics. 2011;2011:1–11. Camiolo S, Porceddu A. gff2sequence, a new user friendly tool for the generation of genomic sequences. BioData Min. 2013;6:15. Bartenhagen C, Dugas M. RSVSim: an R/Bioconductor package for the simulation of structural variations. Bioinformatics. 2013;29:1679–81. Liu X, Han S, Wang Z, Gelernter J, Yang B-Z. Variant callers for next-generation sequencing data: a comparison study. PLoS One. 2013;8:e75619. Li H, Handsaker B, Wysoker A, Fennell T, Ruan J, Homer N, et al. The Sequence Alignment/Map format and SAMtools. Bioinformatics. 2009;25:2078–9. Koboldt DC, Chen K, Wylie T, Larson DE, McLellan MD, Mardis ER, et al. VarScan: variant detection in massively parallel sequencing of individual and pooled samples. Bioinformatics. 2009;25:2283–5. Hatem A, Bozdağ D, Toland AE, Çatalyürek ÜV. Benchmarking short sequence mapping tools. BMC Bioinformatics. 2013;14:184. Xu H, DiCarlo J, Satya RV, Peng Q, Wang Y. Comparison of somatic mutation calling methods in amplicon and whole exome sequence data. BMC Genomics. 2014;15:244. Pightling AW, Petronella N, Pagotto F. Choice of reference-guided sequence assembler and SNP caller for analysis of Listeria monocytogenes short-read sequence data greatly influences rates of error. BMC Res Notes. 2015;8:748. Bioconductor - DNAcopy [http://www.bioconductor.org/packages/release/bioc/html/DNAcopy.html] Xie C, Tammi MT. CNV-seq, a new method to detect copy number variation using high-throughput sequencing. BMC Bioinformatics. 2009;10:80. Dohm JC, Lottaz C, Borodina T, Himmelbauer H. Substantial biases in ultra-short read data sets from high-throughput DNA sequencing. Nucleic Acids Res. 2008;36:e105. Ramel F, Sulmon C, Gouesbet G, Couée I. Natural variation reveals relationships between pre-stress carbohydrate nutritional status and subsequent responses to xenobiotic and oxidative stress in Arabidopsis thaliana. Ann Bot. 2009;104:1323–37. Peele HM, Guan N, Fogelqvist J, Dixelius C. 
Loss and retention of resistance genes in five species of the Brassicaceae family. BMC Plant Biol. 2014;14:298. This project originated from SC's MSc thesis in Digital Biology at the University of Manchester. For this reason, the lead author would like to thank Prof. Andy Brass and Dr. Heather Vincent for guidance and advice. Moreover, we would like to thank Dr. Francesco Vezzi, Prof. Michele Morgante and Dr. Walter Sanseverino for their help and suggestions. Università degli studi di Sassari, Dipartimento di Agraria, SACEG, Via Enrico De Nicola 1, Sassari, 07100, Italy Salvatore Camiolo & Andrea Porceddu Plant Functional Biology and Climate Change Cluster (C3), University of Technology Sydney, PO Box 123 Broadway, NSW 2007, Sydney, Australia Gaurav Sablok Salvatore Camiolo Andrea Porceddu Correspondence to Salvatore Camiolo. SC designed/produced the software and contributed to the manuscript drafting. GS tested the software and provided suggestions for some of the implemented algorithms. AP contributed to the strategy underlying the software and helped to write the manuscript. All authors read and approved the final manuscript. Flowchart describing the eight Altools modules. Blue portions represent novel algorithms, whereas red portions represent third-party embedded software. (DOC 21 kb) Sequence read archive (SRA) experiments for A. thaliana accessions Bur0 and Tsu1 available at http://www.ncbi.nlm.nih.gov/sra. (DOC 209 kb) Pipeline for the identification of deletion breakpoints. (a) Approximate deletion boundaries are inferred by detecting mapped paired-end reads that align at a distance that is not compatible with the expected insert. Overlapping sets of improperly-mapped mates (e.g. possibly underlining the same deletion) are merged at this stage. (b) A 2000-bp range is selected in the reference genome at each of the found deletion boundaries (deletion start ± 1000 bp and deletion end ± 1000 bp). Reads that are mapped within these regions are extracted from the alignment file together with the corresponding unmapped mates. (c) BLASTn is used to map reads identified at point (b) onto the reference genome and deletion breakpoints are inferred by the position of the detected partial alignments. (DOC 21 kb) Possible duplication interference affecting the correct identification of a large deletion. In a real deletion, reads mapping to the genomic portion A have their mates mapped to portion B at a distance that is not compatible with their library insert size. However, if a deletion did not occur between A and B, but rather B is duplicated somewhere upstream within the same chromosome, then reads mapping to A may have their mates mapped either in B or in Bdup. Mate pairs aligning in the portions A–Bdup will feature a mapping distance that is not compatible with their insert and, in this case, a deletion may be erroneously called. (DOC 21 kb) Pileup analyser parameters to detect the simulated polymorphisms in the A. thaliana genome with different reference coverage values. (DOC 207 kb) Distribution of the differences (PPV and sensitivity) between detected and expected breakpoint positions derived from Large deletion finder analysis of the simulated reads dataset (coverage 4x, 20x, 40x and 100x) with three large deletion sizes (2000, 10000 and 50000 bp). 
(DOC 21 kb) Scatterplot showing differences (PPV and sensitivity) between detected and expected copy numbers calculated by the Coverage analyser tool on simulated reads datasets (coverage 4x, 20x, 40x and 100x) and three duplications sizes (2000, 10000 and 50000 bp). (DOC 21 kb) Benchmark of Altools for the detection of copy number variations (CNVs) and large deletions. The Coverage analyser module was compared to CNVseq [23] by testing its performance on the simulated A. thaliana genome with 10x coverage and three CNV segment sizes (2000, 10,000 and 50,000 bp). Default parameters were used in CNVseq except the window size (−−window-size 50) for the sake of uniformity with the Altools settings. The Large deletions finder module was compared to Pindel [10] by testing its performance on the simulated A. thaliana genome with 10x coverage and three deleted segment sizes (2000, 10,000 and 50,000 bp). To compare the software platforms under equivalent conditions, Pindel was set to output only deletions (−r false -t false -l false) while setting all the remaining parameters to their default values (for the detection of 50,000-bp deletions the flag –x 6 was added). Benchmarking was carried out on a server equipped with an Intel(R) Xeon(R) CPU X5660 working at 2.80 GHz. (DOC 21 kb) G|C bias in the Bur0 and Tsu1 Illumina NGS datasets. (DOC 21 kb) Additional file 10: Figure S6. Frequency of (A) SNPs and (B) indels in the alignment of Bur0 and Tsu1 sequences on the A. thaliana reference genome. (DOC 21 kb) (Top) Frequency of the four nucleotides in the reference and target genomes at a polymorphic site. (Bottom) Frequency of the four nucleotides among the inserted and deleted bases. (TIFF 142 kb) Comparison of polymorphisms (SNPs and indels) found in the A. thaliana accessions Bur0 and Tsu1. (TIFF 68 kb) Additional file 13: Table S5. Gene Ontology enrichment analysis of the Bur0 accession transcripts that are enclosed in gained regions (P = process and F = function). (DOC 21 kb) Gene Ontology enrichment analysis of the Bur0 accession transcripts that are enclosed in lost regions, including copy number variation and zero coverage reference genome portions (P = process, F = function and C = cellular component). (DOC 215 kb) Camiolo, S., Sablok, G. & Porceddu, A. Altools: a user friendly NGS data analyser. Biol Direct 11, 8 (2016). https://doi.org/10.1186/s13062-016-0110-0 Indels Large deletions Re-sequencing
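The breakpoint-refinement step outlined in Additional file 3 (inferring exact coordinates from the broken alignments of reads spanning a deletion) can be illustrated with a toy sketch. It simply takes the best-supported pair of alignment boundaries from precomputed split hits and is not the BLASTn-based procedure used by Altools; the input format and thresholds are assumptions for illustration.

```python
from collections import Counter

def refine_breakpoints(split_hits):
    """Infer exact deletion breakpoints from 'broken' read alignments.

    Each item is (left_hit_end, right_hit_start): the reference coordinate
    where the first part of a read stops aligning and the coordinate where
    the remainder resumes. The most frequently supported pair wins.
    """
    votes = Counter(split_hits)
    (left_end, right_start), support = votes.most_common(1)[0]
    return {"del_start": left_end + 1, "del_end": right_start - 1,
            "support": support}

# Three reads agree that the alignment breaks after 10,499 and resumes at 20,500:
hits = [(10_499, 20_500), (10_499, 20_500), (10_499, 20_500), (10_450, 20_460)]
print(refine_breakpoints(hits))  # deletion spans 10,500..20,499
```

In the real pipeline the split alignments come from BLASTn hits of reads extracted around the approximate boundaries, as described in Additional file 3.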
Sven Kosub Universität Konstanz | Uni-Konstanz · Department of Computer and Information Science Prof. Dr. rer. nat. habil., Dipl.-Math. I am an Adjunct Professor at the Department of Computer and Information Science leading a Theory of Computing group since 2015. After studies in mathematics and computer science, I received a PhD for research in computational complexity theory. Finishing a Habilitation in computer science, I joined the University of Konstanz in 2008 for a lectureship in formal foundations of computer science. My scientific expertise lies in the field of theoretical computer science, i.e., algorithms and complexity, discrete mathematics and logic, and specific interests of mine are the theoretical foundations of artificial intelligence, data science, and social network analysis. I am slightly biased towards examples from the sports domain. Finding a Periodic Attractor of a Boolean Network Tatsuya Akutsu Avraham A. Melkman Takeyuki Tamura In this paper, we study the problem of finding a periodic attractor of a Boolean network (BN), which arises in computational systems biology and is known to be NP-hard. Since a general case is quite hard to solve, we consider special but biologically important subclasses of BNs. For finding an attractor of period 2 of a BN consisting of $n$ OR func... Biases in the Football Betting Market Imant Daunhawer David Schoch This research uses football online betting odds of a broad variety of matches and bookmakers to identify known biases in odds pricing, namely the favorite-longshot bias and the away-favorite bias. Furthermore, it tries to answer the question whether a naive strategy of betting against these biases can be profitable. Our findings are consistent with... How to Make Sense of Team Sport Data: From Acquisition to Data Modeling and Research Aspects Manuel Stein Halldor Janetzko Daniel Seebacher Michael Grossniklaus Automatic and interactive data analysis is instrumental in making use of increasing amounts of complex data. Owing to novel sensor modalities, analysis of data generated in professional team sport leagues such as soccer, baseball, and basketball has recently become of concern, with potentially high commercial and research interest. The analysis of... A note on the triangle inequality for the Jaccard distance Two simple proofs of the triangle inequality for the Jaccard distance in terms of nonnegative, monotone, submodular functions are given and discussed. Global Evaluation for Decision Tree Learning Fabian Späh We transfer distances on clusterings to the building process of decision trees, and as a consequence extend the classical ID3 algorithm to perform modifications based on the global distance of the tree to the ground truth--instead of considering single leaves. Next, we evaluate this idea in comparison with the original version and discuss occurring... A Note on the complexity of manipulating weighted Schulze voting Julian Müller We prove that the constructive weighted coalitional manipulation problem for the Schulze voting rule can be solved in polynomial time for an unbounded number of candidates and an unbounded number of manipulators. Smoothed Analysis of Trie Height by Star-like PFAs Stefan Eckhardt Johannes Nowak Tries are general purpose data structures for information retrieval. The most significant parameter of a trie is its height $H$ which equals the length of the longest common prefix of any two string in the set $A$ over which the trie is built. 
Analytical investigations of random tries suggest that ${\bf E}(H)\in O(\log(\|A\|))$, although $H$ is unb... Textuelle Berechenbarkeit Die folgenden Überlegungen betreffen die Frage, inwieweit es möglich ist, Texte als Programme, Narrative als Algorithmen, Erzählungen als Berechnungen aufzufassen. Dabei wird auf das Textverständnis im Sinne der literaturwissenschaftlichen Erzähltheorie und auf Begrifflichkeiten der Berechenbarkeits- und Komplexitätstheorie aus der Theoretischen In... Dichotomy results for fixed point counting in boolean dynamical systems Christopher Homan We present dichotomy theorems regarding the computational complexity of counting fixed points in boolean (discrete) dynamical systems, i.e., finite discrete dynamical systems over the domain {0, 1}. For a class F of boolean functions and a class G of graphs, an (F,G)-system is a boolean dynamical system with local transitions functions lying in F a... Inequalities for the Number of Walks in Graphs Hanjo Täubig Jeremias Weihmann Ernst W. Mayr We investigate the growth of the number w_k of walks of length k in undirected graphs as well as related inequalities. In the first part, we deduce the inequality w_2a+c⋅w_2(a+b)+c ≤ w_2a⋅w_2(a+b+c), which we call the Sandwich Theorem. It unifies and generalizes an inequality by Lagarias et al. and an inequality by Dress and Gutman. In the same way... Raymond Hemmecke Was messen Zentralitätsindizes? Ulrik Brandes Bobo Nick Unser Fokus sind die theoretischen Grundlagen gängiger Methoden zur Bestimmung von Zentralität in Netzwerken. Combinatorial Network Abstraction by Trees and Distances Moritz G. Maass Sebastian Wernicke This work draws attention to combinatorial network abstraction problems which are specified by a class \(\mathcal{P}\) of pattern graphs and a real-valued similarity measure \(\varrho\) based on certain graph properties. For fixed \(\mathcal{P}\) and \(\varrho\), the optimization task on any graph G is to find a subgraph G′ which belongs to \(\math... The Boolean Hierarchy of NP-Partitions Klaus W. Wagner We introduce the boolean hierarchy of k-partitions over NP for k 3 as a generalization of the booelean hierarchy of sets (i.e., 2-partitions) over NP. Whereas the structure of the latter hierarchy is rather simple the structure of the boolean hierarchy of k-partitions over NP for k 3 turns out to be much more complicated. We establish the Embedding... Dichotomy Results for Fixed-Point Existence Problems for Boolean Dynamical Systems A complete classification of the computational complexity of the fixed-point existence problem for boolean dynamical systems, i.e., finite discrete dynamical systems over the domain {0, 1}, is presented. For function classes F and graph classes G, an (F, G)-system is a boolean dynamical system such that all local transition functions lie in F and t... We present dichotomy theorems regarding the computational complexity of counting fixed points in boolean (discrete) dynamical systems, i.e., finite discrete dynamical systems over the domain {0,1}. For a class F of boolean functions and a class G of graphs, an (F,G)-system is a boolean dynamical system with local transitions functions lying in F an... Smoothed Analysis of Trie Height Tries are very simple general purpose data structures for information retrieval. A crucial parameter of a trie is its height. In the worst case the height is unbounded when the trie is built over a set of $n$ strings. 
Analytical investigations have shown that the average heught under many random sources is logarithmic in $n$. Experimental studies o... Computational Analysis of Complex Systems: Discrete Foundations, Algorithms, and the Internet Inferring relevant sysem parameters from monitorable data is a fundamental requisite for handling large-scale socio-technical systems. In this thesis we address this set of problems both theoretically and application-oriented. We mathematically study discrete ex post models for dynamical sysems. A particular focus is on algorithms for identifying s... Acyclic Type-of-Relationship Problems on the Internet : An Experimental Analysis Benjamin Hummel An experimental study of the feasibility and accuracy of the acyclicity approach introduced in [14] for the inference of business relationships among autonomous systems (ASes) is provided. We investigate the maximum acyclic type-of-relationship problem: on a given set of AS paths, find a maximum-cardinality subset which allows an acyclic and valley... Acyclic Type-of-Relationship Problems on the Internet Moritz G. Maaß We contribute to the study of inferring commercial relationships between autonomous systems (AS relationships) from observable BGP routes. We deduce several forbidden patterns of AS relationships that impose a certain type of acyclicity on the AS graph. We investigate algorithms for solving the acyclic all-paths type-of-relationship problem, i.e.,... All-Pairs Ancestor Problems in Weighted Dags Matthias Baumgart Jan Griebsch This work studies (lowest) common ancestor problems in (weighted) directed acyclic graphs. We improve previous algorithms for the all-pairs representative LCA problem to O(n^2.575) by using fast rectangular matrix multiplication. We prove a first non-trivial upper bound of O( min {n^2 m, n^3.575 }) for the all-pairs all lowest common ancestors prob... The Complexity of Detecting Fixed-Density Clusters Klaus Holzapfel We study the complexity of finding a subgraph of a certain size and a certain density, where density is measured by the average degree. Let gamma: N -> Q be any density function, i.e., gamma is computable in polynomial time and satisfies gamma(k) 0 and has a polynomial-time algorithm for gamma=2 O(1/k). Cluster Computing and the Power of Edge Recognition Lane A. Hemaspaandra Although complexity theory already extensively studies path-cardinality-based restrictions on the power of nondeterminism, this paper is motivated by a more recent goal: To gain insight into how much of a restriction it is of nondeterminism to limit machines to have just one contiguous (with respect to some simple order) interval of accepting paths... The Complexity of Computing the Size of an Interval Given a p-order A over a universe of strings (i.e., a transitive, reflexive, antisymmetric relation such that if (x, y) is an element of A then |x| is polynomially bounded by |y|), an interval size function of A returns, for each string x in the universe, the number of strings in the interval between strings b(x) and t(x) (with respect to A), where... Local Density Actors in networks usually do not act alone. By a selective process of establishing relationships with other actors, they form groups. The groups are typically founded by common goals, interests, preferences or other similarities. Standard examples include personal acquaintance relations, collaborative relations in several social domains, and coali... 
NP-Partitions over Posets with an Application to Reducing the Set of Solutions of NP Problems The boolean hierarchy of k-partitions over NP for k 2 was introduced as a generalization of the well-known boolean hierarchy of sets. The classes of this hierarchy are exactly those classes of NPpartitions which are generated by nite labeled lattices. We extend the boolean hierarchy of NP-partitions by considering partition classes which are genera... Boolean NP-Partitions and Projective Closure When studying complexity classes of partitions we often face the situation that different partition classes have the same component classes. The projective closures are the largest classes among these with respect to set inclusion. In this paper we investigate projective closures of classes of boolean NP-partitions, i.e., partitions with components... Generic Separations and Leaf Languages Matthias Galota Heribert Vollmer In the early nineties of the previous century, leaf languages were introduced as a means for the uniform characterization of many complexity classes, mainly in the range between P (polynomial time) and PSPACE (polynomial space). It was shown that the separability of two complexity classes can be reduced to a combinatorial property of the correspond... Abstract We study the complexity of nding a subgraph of a certain size and a certain density, where density is measured by the average degree. Let : N ! Q+ be any density function, i.e., is computable in polynomial time and satises (k) k 1 for all k 2 N. Then -Cluster is the problem of deciding, given an undirected graph G and a natural number k, w... We study the complexity of finding a subgraph of a certain size and a certain density, where density is measured by the average degree. Let γ : ℕ → ℚ+ be any density function, i.e., γ is computable in polynomial time and satisfies γ(k) ≤ k − 1 for all k ∈ ℕ. Then γ-Cluster is the problem of deciding, given an undirected graph G and a natural number... Theoretische Grundlagen des Internets Angelika Steger Uniform Characterizations of Complexity Classes of Functions Heinz Schmitz We introduce a general framework for the denition of function classes. Our model, which is based on nondeterministic polynomial-time Turing transducers, allows uniform characterizations of FP, FP NP , FP NP [O(log n)], FP NP tt , counting classes (#P, #NP, #coNP, GapP, GapP NP ), optimization classes (maxP, minP, maxNP, minNP), promise classes (NPS... Boolean Partitions and Projective Closure When studying the complexity of partitions one often faces the situation that different partition classes have the same projection classes. The projectively closed classes are the greatest (with respect to set-inclusion) among these. In this paper we determine important partition classes that are projectively closed and we prove the rather surprisi... Persistent Computations We study computational effects of persistent Turing machines, independently introduced by Goldin and Wegner [GW98], and Kosub [Kos98]. Persistence is a mode of interaction which makes it possible to consider the computational behavior of a Turing machine as an infinite sequence of autonomous computations. We investigate different computability conc... We study the complexity of counting the number of elements in intervals of feasible partial orders. Depending on the properties that partial orders may have, such counting functions have different complexities. If we consider total, polynomial-time decidable orders then we obtain exactly the #P functions. 
We show that the interval size functions fo... Complexity and Partitions Computational complexity theory usually investigates the complexity of sets, i.e., the complexity of partitions into two parts. But often it is more appropriate to represent natural problems by partitions into more than two parts. A particularly interesting class of such problems consists of classification problems for relations. For instance, a bi... Types of Separability In this paper we demonstrate that the studies of structural properties of the boolean hierarchy of NP-partitions are not only worthwhile in their own, e.g., as a framework for capturing the complexity of classication problems but have interesting ties with other research in computational complexity: We discuss the relationships to the study of sepa... We introduce the boolean hierarchy of k-partitions over NP for k ≥ 3 as a generalization of the boolean hierarchy of sets (i.e., 2-partitions) over NP. Whereas the structure of the latter hierarchy is rather simple the structure of the boolean hierarchy of k-partitions over NP for k ≥ 3 turns out to be much more complicated. We formulate the Embedd... On NP-Partitions over Posets with an Application to Reducing the Set of Solutions of NP Problems The boolean hierarchy of k-partitions over NP for k ≥ 2 was introduced as a generalization of the well-known boolean hierarchy of sets. The classes of this hierarchy are exactly those classes of NP-partitions which are generated by finite labeled lattices. We refine the boolean hierarchy of NP-partitions by considering partition classes which are g... A Note on Unambiguous Function Classes Introduction Unambiguous computation according to UP has become a classical notion in computational complexity theory. Unambiguity is also used in a theorem of Wagner [14]. A set L is in P iff there are a set A 2 NP and a polynomial p such that for all x and y with jyj p(jxj), if (x; y) 2 A then (x; y Gamma 1) 2 A, and x 2 L iff the maximal y with... The Boolean Hierarchy of Partitions Allgemeine Systeme der Toleranzgruppenoptimierung Klaus-Peter Zocher Uniformly Defining Complexity Classes of Functions We introduce a general framework for the definition of function classes. Our model, which is based on polynomial time nondeterministic Turing transducers, allows uniform characterizations of FP, FP NP , counting classes (#DeltaP, #DeltaNP, #DeltacoNP, GapP, GapP NP ), optimization classes (maxDeltaP, minDeltaP, maxDeltaNP, minDeltaNP), promise clas... On Cluster Machines and Function Classes We consider a special kind of non-deterministic Turing machines. Cluster machines are distinguished by a neighbourhood relationship between accepting paths. Based on a formalization using equivalence relations some subtle properties of these machines are proven. Moreover, by abstraction we gain the machine-independend concept of cluster sets which... Predrag Tosic David Ron Karger Guy Kortsarz Samuel R. Buss Ramesh Govindan IBM Research Mitsunori Ogihara Daniel Medina Monumental Sports & Performance Murray Loew Network Data Analytics A unifying long-term project focusing all-encompassingly on algorithmic methods for network data in mathematical, theoretical, and practical perspectives. Network data (i.e., overlapping dyadic data) is collected in many different domains of empirical research, each of which equipped with specific methodologies. 
The challenge is to look at how researchers in their domains work with data computationally and to come up with founded algorithmic methods to support them in their daily work. A particular interest is in staggered processes (pipelines) of algorithmic data transformations observable in empirical studies (e.g., sequences of projections of incidence matrices on either side, distance/walk/similarity-based derivations, geometrical embeddings). An exemplary research goal is to establish interpretable pipelines for clusterings in networks. Tools for the Laws of Form A project based on the conjecture that a formal description of self-organization (notably, communication) can be based on calculi nullifying the difference of operator and operands. The famous, semigraphical calculus of forms (aka calculus of indications) of George Spencer Brown, which identifies distinction as an operation (cross) with distinction as the result of an operation (separated spaces), is such a calculus. Receiving criticisms from the computer science community for this indifference which apparently contradicts the principles of programming languages, it is nevertheless beneficial to see how the calculus can be used to describe systems, in particular, social (and socio-technical) systems. In the application-oriented part of the project, several tools are designed and implemented to support field work with Spencer Brown's forms: an automated tool for generating layouts of re-entry forms (in LaTeX) while optimizing the layout according to several criteria (like planarity or minimizing crossings) and apps understanding forms drawn on a tablet and generating code out of it. The theoretical part of the project is devoted to a complete form analysis and form synthesis. Existing studies only consider simple re-entry forms with just one re-entry. There are circular relations between the coding part (apps) and the theory part (theorems) which give rise to several questions involving machine learning and computer vision techniques. Sports Intelligence A mission to identify performance indicators, produce forecasts, and support decision making in the area of team sports using descriptive, predictive, or visual analytics and any kind of data available. A particular focus is on soccer, the beautiful game. For instance, on the basis of spatiotemporal data obtained from sensors in shoes, we are interested in methods to recognize, evaluate, and visualize all possible pass options for a ball-possessing player. This allows for the assessment of the quality of actually realized passes. Another idea we follow is the use of betting odds as "ground truth." Coming up with well-founded interaction models in team sports (considered to be the ultimate theoretical goal) is a challenging and yet unresolved task. Beloved inferring team strengths from collected match outcomes in the past is based on an information basis presumably too thin for both explanation and theory-building. Prediction markets like bookmakers or betting exchanges promise more enriched signals. Despite the well-studied tendency to information efficiency in financial markets, there has been, and still is, much discussion on biases in betting markets, e.g., the favourite-longshot bias in general or the draw bias in soccer. A clarification of possible bias structures is required. An opening of the project towards amateur and mass sports, organization of sport events, or fitness & health (quantified-self movement) is planned. 
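As a small, generic illustration of reading bookmaker odds as a probability signal (the simplest possible margin removal; the odds below are made up and the function is not taken from any of the studies listed above):

```python
def implied_probabilities(decimal_odds):
    """Convert bookmaker decimal odds to probabilities, removing the overround.

    The raw inverses 1/o sum to more than 1 (the bookmaker's margin);
    normalising them is the simplest way to read odds as probabilities.
    """
    raw = [1.0 / o for o in decimal_odds]
    overround = sum(raw)
    return [p / overround for p in raw]

# Example: home / draw / away odds for one match.
print(implied_probabilities([1.85, 3.60, 4.40]))
```

Comparing such normalised probabilities with realised outcomes over many matches is one way to quantify the favourite-longshot and away-favourite biases mentioned above.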
States of matter Class 11 Questions and Answers- Important for Exams Get states of matter class 11 important questions and answers for various exams.View the Important Question bank for Class 11 Chemistry and other subjects too. These important questions will play significant role in clearing concepts of Chemistry. These questions and answers are designed, keeping States of matter Class 11 syllabus as per NCERT in mind and the questions are updated with respect to upcoming Board exams. You will get here all the important questions for class 11 chemistry. Learn the concepts of States of matter and other topics of class 11 with these questions and answers. Click Here for Detailed Chapter-wise Notes of Chemistry for Class 11th, JEE & NEET. You can access free study material for all three subject's Physics, Chemistry and Mathematics. Click Here for Detailed Notes of any chapter. eSaral provides you complete edge to prepare for Board and Competitive Exams like JEE, NEET, BITSAT, etc. We have transformed classroom in such a way that a student can study anytime anywhere. With the help of AI we have made the learning Personalized, adaptive and accessible for each and every one. Visit eSaral Website to download or view free study material for JEE & NEET. Also get to know about the strategies to Crack Exam in limited time period. Q. What is triple point of a substance ? Ans. It is the point at which solid, liquid and vapour of a substance are in equilibrium with each other. For example, triple point of water is the temperature at which ice, water and its vapour coexist. Q. $H C l$ is gas while $H F$ is liquid at room temperature, why? Ans. HF molecules are associated with intermolecular $H-$ bonding, therefore, it is liquid whereas $H C l$ is gas because less Vander Waal's forces of attraction. Q. What is boiling point of water at (i) higher altitudes, (ii) in pressure cooker? Ans. $(i)<100^{\circ} C(\text { ii })>100^{\circ} C$ Q. Why is air denser at lower level than at higher altitudes ? Ans. Heavier air will come down and lighter air goes up. Air at lower level is denser since it is compressed by mass of air above it. Q. What is dry ice ? Why is it so called ? Ans. Solid $\mathrm{CO}_{2}$ It is because solid $\mathrm{CO}_{2}$ is directly converted into gaseous state (sublimes) and does not change into liquid, so it is called dry ice. Q. A rubber balloon permeable to hydrogen in all its isotopic forms is filled with deuterium $\left(D_{2}\right)$ and then placed in a box containing pure hydrogen. Will the balloon expand or contract or remains as it is? Ans. Balloon will expand because rate of diffusion of $H_{2}$ is greater than that of $D_{2}$ Q. What type of graph would you get when $P V$ is plotted against $P$ at constant temperature? Ans. A straight line parallel to pressure axis. Q. Name and state the law governing the expansion of gases when they are heated or cooled at constant pressure. Ans. Charle's Law. Q. At a certain altitude, the density of air is $1 / 10$ th of the density of the earth's atmosphere and temperature is $-10^{\circ} \mathrm{C} .$ What is the pressure at that altitude? Assume that air behaves like an ideal gas, has uniform composition and is at S.T.T. at the earth's surface. Ans. $\frac{P_{1} V_{1}}{T_{1}}=\frac{P_{2} V_{2}}{T_{2}}$ or $P_{2}=\frac{P_{1} V_{1} T_{2}}{T_{1} V_{2}}$ But $d \propto \frac{1}{V} .$ Hence $P_{2}=\frac{P_{1} T_{2}}{T_{1}}\left(\frac{d_{2}}{d_{1}}\right)$ $=\frac{760 \times 263}{273}\left(\frac{1}{10}\right)=73.2 \mathrm{mm}$ Q. 
How much time it would take to distribute one Avogadro's number of wheat grains, if $10^{10}$ grains are distributed each second? [NCERT] Ans. Number of years $=\frac{6.023 \times 10^{23}}{10^{10} \times 365 \times 24 \times 60 \times 60}=1,908,00$ years. Q. A chamber of constant volume contains hydrogen gas. When the chamber is immersed in a bath of melting ice $\left(0^{\circ} \mathrm{C}\right),$ the pressure of the gas is $1.07 \times 10^{2} \mathrm{kPa} .$ What pressure will be indicated when the chamber is brought to $100^{\circ} \mathrm{C} ?$ [NCERT] Q. What would be the S.I. unit for the quantity $p V^{2} T^{2} / n ?$ [NCERT] Ans. $\frac{p V^{2} T^{2}}{n}=\frac{\left(N m^{-2}\right)\left(m^{3}\right)^{2}(K)^{2}}{m o l}$ $=N m^{4} K^{2} m o l^{-1}$ Q. What is the value of universal gas constant ? What is its value in SI units ? Ans. $R=8.314 \mathrm{JK}^{-1} \mathrm{mol}^{-1}$ Q. Explain the significance of the vander Waal's parameters ? Ans. $^{c} a^{\prime}$ is a measure of the magnitude of the intermolecular forces of attraction while $b$ is a measure of the effective size of the gas molecules. Q. Give most common application of Dalton's law. Ans. The air pressure decreases with increases in altitude. That is why jet aeroplane flying at high altitude need pressurization of the cabin so that partial pressure of oxygen is sufficient for breathing. Q. $N_{2} O$ and $C O_{2}$ have the same rate of diffusion under same conditions of temperature and pressure. Why? Ans. Both have same molar mass $\left(=44 g m o l^{-1}\right)$. According to Graham's law of diffusion, rates of diffusion of different gases are inversely proportional to the square root of their molar masses under same conditions of temperature and pressure. Q. At what temperature will oxygen molecules have the same K.E. as ozone molecules at $30^{\circ} \mathrm{C} ?$ Ans. At $30^{\circ} \mathrm{C},$ kinetic energy depends only on absolute temperature and not on the identity of a gas. Q. Which two postulates of the kinetic molecular theory are only approximations when applied to real gases? Ans. (i) Inter molecular forces between molecules are negligible. (ii) Molecules of a gas have negligible volumes. Q. Account for the following properties of gases on the basis of kinetic molecular theory, (i) High compressibility (ii) Gases occupy whole of the volume available to them. Ans. (i) High compressibility is due to large empty spaces between the molecules. (ii) Due to absence of attractive forces between molecules, the molecules of gases can easily separate from one another. Q. Critical temperature of carbon dioxide and $C H_{4}$ are $31.1^{\circ} \mathrm{C}$ and $-81.9^{\circ} \mathrm{C}$ respectively. Which of these has stronger intermolecular forces and why? $\quad$ [NCERT] Ans. Higher the critical temperature, more easily the gas can be liquefied i.e. greater are the intermolecular forces of attraction. Therefore, $\mathrm{CO}_{2}$ has stronger intermolecular forces than $\mathrm{CH}_{4}$ Q. (i) What will be the pressure exerted by a mixture of $3.2 \mathrm{g}$ of methane and $4.4 \mathrm{g}$ of carbon dioxide contained in a $9 d m^{3}$ flask at $27^{\circ} \mathrm{C} ?$ (ii) Give two example of Covalent solids. OR (i) Calculate the total number of electrons present in 1.4 $g$ of nitrogen gas. (ii) Which of the two gases, ammonia and hydrogen chloride, will diffuse faster and by what factor? (iii) Why urea has sharp melting point but glass does not? Ans. 
(i) $p_{CH_4}=\frac{nRT}{V}=\frac{3.2}{16} \times \frac{0.0821 \mathrm{~L~atm~K^{-1}~mol^{-1}} \times 300 \mathrm{~K}}{9 \mathrm{~L}}=\frac{0.2 \times 24.63}{9}=\frac{4.926}{9}=0.547 \mathrm{~atm}$. Similarly, $p_{CO_2}=\frac{4.4}{44} \times \frac{0.0821 \times 300}{9}=0.274 \mathrm{~atm}$, so the total pressure of the mixture is $0.547+0.274 \approx 0.82 \mathrm{~atm}$.
(ii) Boron and silicon are examples of covalent solids.
Or
(i) $28 \mathrm{~g}$ of nitrogen gas contains $2 \times 7 \times 6.023 \times 10^{23}$ electrons. Therefore $1.4 \mathrm{~g}$ of nitrogen gas contains $\frac{2 \times 7 \times 6.023 \times 10^{23}}{28} \times 1.4=\frac{2 \times 7 \times 6.023 \times 10^{23}}{20}=\frac{84.32 \times 10^{23}}{20}=4.2161 \times 10^{23}$ electrons.
(ii) $NH_3$ will diffuse faster: $\frac{r_{NH_3}}{r_{HCl}}=\sqrt{\frac{36.5}{17}}=\sqrt{2.14}=1.46$, i.e. about 1.46 times faster.
(iii) Urea is a crystalline solid and therefore has a sharp melting point, whereas glass does not because it is amorphous, i.e. it lacks a regular three-dimensional structure.
Q. What will be the minimum pressure required to compress $500 \mathrm{~dm}^{3}$ of air at 1 bar to $200 \mathrm{~dm}^{3}$ at $30^{\circ} \mathrm{C}$?
Ans. By Boyle's law, $p_{1}V_{1}=p_{2}V_{2}$, so $p_{2}=\frac{1 \mathrm{~bar} \times 500 \mathrm{~dm}^{3}}{200 \mathrm{~dm}^{3}}=2.5 \mathrm{~bar}$.
Q. A manometer is connected to a gas-containing bulb. The open arm reads 43.7 cm whereas the arm connected to the bulb reads 15.6 cm. If the barometric pressure is 743 mm mercury, what is the pressure of the gas in bar? [NCERT]
Ans. Pressure of gas $=$ atmospheric pressure $+$ difference between mercury levels $=743 \mathrm{~mm}+(43.7 \mathrm{~cm}-15.6 \mathrm{~cm})=74.3 \mathrm{~cm}+28.1 \mathrm{~cm}=102.4 \mathrm{~cm}$ of $\mathrm{Hg}$; $P$ in bar $=\frac{102.4}{76}=1.347 \mathrm{~bar}$.
Q. Calculate the number of moles of hydrogen $\left(H_{2}\right)$ present in a $500 \mathrm{~cm}^{3}$ sample of hydrogen gas at a pressure of $760 \mathrm{~mm}$ Hg and $27^{\circ} \mathrm{C}$. [NCERT]
Ans. $PV=nRT$ with $P=1 \mathrm{~atm}$, $V=500 \mathrm{~cm}^{3}$, $R=82.1 \mathrm{~atm~cm}^{3}\mathrm{~K}^{-1}\mathrm{~mol}^{-1}$, $T=300 \mathrm{~K}$: $n=\frac{1 \mathrm{~atm} \times 500 \mathrm{~cm}^{3}}{82.1 \mathrm{~atm~cm}^{3}\mathrm{~K}^{-1}\mathrm{~mol}^{-1} \times 300 \mathrm{~K}}=0.02 \mathrm{~mol}$.
Q. $34.05 \mathrm{~mL}$ of phosphorus vapour weighs $0.0625 \mathrm{~g}$ at $546^{\circ} \mathrm{C}$ and 1 bar pressure. What is the molar mass of phosphorus? Or: In terms of Charles' law, explain why $-273^{\circ} \mathrm{C}$ is the lowest possible temperature. [NCERT]
Ans. $pV=nRT$: $1 \mathrm{~bar} \times \frac{34.05}{1000} \mathrm{~L}=\frac{0.0625}{M} \times 0.083 \times 819 \mathrm{~K}$, so $M=\frac{0.0625 \times 0.083 \times 819 \times 1000}{34.05}=\frac{4248.5625}{34.05}=124.77 \mathrm{~g~mol}^{-1}$.
Or: Charles plotted the volume against temperature in $^{\circ}\mathrm{C}$. These plots, when extrapolated, intersect the temperature axis at the same point, $-273^{\circ}\mathrm{C}$. He concluded that all gases at this temperature would have zero volume and that below this temperature the volume would be negative. This shows that $-273^{\circ}\mathrm{C}$ is the lowest attainable temperature.
Q. A vessel of $120 \mathrm{~mL}$ capacity contains a certain amount of gas at $35^{\circ} \mathrm{C}$ and 1.2 bar pressure. The gas is transferred to another vessel of volume $180 \mathrm{~mL}$ at $35^{\circ} \mathrm{C}$. What would be its pressure? [NCERT]
Ans.
Since temperature and amount of gas remains constant, therefore, Boyle's law is applicable. Q. A balloon is filled with hydrogen at room temperature. It will burst if pressure exceeds 0.2 bar. If at 1 bar pressure the gas occupies 2.27 L volume, upto what volume can the balloon be expanded ? [NCERT] Ans. According to Boyle's law, at constant temperature, Since balloon bursts at 0.2 bar pressure, the volume of the balloon should be less than 11.35 L. Q. A student forgot to add the reaction mixture to the round bottomed flask at $27^{\circ} \mathrm{C}$ but instead, he/she placed the flask on the flame. After a lapse of time, he realized his mistake, and using a pyrometer, he found the temperature of the flask was $477^{\circ} \mathrm{C} .$ What fraction of air would have been expelled out? [NCERT] Ans. Suppose volume of vessel $=V c m^{3}$ i.e., volume of air in the flask at $27^{\circ} \mathrm{C}=\mathrm{Vcm}^{3}$ $\frac{V_{1}}{T_{1}}=\frac{V_{2}}{T_{2}}, \quad$ i.e., $\quad \frac{V}{300}=\frac{V_{2}}{750} \quad$ or $\quad V_{2}=2.5 \mathrm{V}$ $\therefore$ Volume expelled $=2.5 \mathrm{V}-\mathrm{V}=1.5 \mathrm{V}$ Fraction of air expelled $=\frac{1.5 \mathrm{V}}{2.5 \mathrm{V}}=\frac{3}{5}$ Q. What is the difference in pressure between the top and bottom of a vessel $76 \mathrm{cm}$ deep at $27^{\circ} \mathrm{C}$ when filled with (i) water (ii) mercury? Density of water at $27^{\circ} \mathrm{C}$ is $0.990 \mathrm{g} \mathrm{cm}^{-3}$ and that of mercury is $13.60 \mathrm{g} \mathrm{cm}^{-3}$ Ans. Pressure $=$ height $\times$ density $\times g$ Case (i). Pressure $=76 \mathrm{cm} \times 0.99 \mathrm{g} / \mathrm{cm}^{3} \times 981 \mathrm{cm} / \mathrm{s}^{2}$ $=7.38 \times 10^{4}$ dynes $\mathrm{cm}^{-2}$ $=0.073$ atm $\left(1 \mathrm{atm}=1.013 \times 10^{6} \text { dynes } \mathrm{cm}^{-2}\right)$ Case (ii). Pressure $=76 \mathrm{cm} \times 13.6 \mathrm{g} / \mathrm{cm}^{3} \times 981 \mathrm{cm} / \mathrm{s}^{2}$ $=1.013 \times 10^{6}$ dynes $\mathrm{cm}^{-2}=1 \mathrm{atm}$ Q. An iron cylinder contains helium at a pressure of $250 \mathrm{kPa}$ at $300 K .$ The cylinder can withstand a pressure of $1 \times 10^{6}$ pa. Theroom in which cylinder is placed catches fire. Predict whether the cylinder will blow up before it melts or not. (M.P. of the cylinder $=1800 K$ ) Q. On a ship sailing in a pacific ocean where temperature is $23.4^{\circ} \mathrm{C},$ a balloon is filled with $2 \mathrm{L}$ air. What will be the volume of the balloon when the ship reaches Indian ocean, where temperature is $26.1^{\circ} \mathrm{C} ? \quad$ [NCERT] Ans. According to Charles' law $\frac{V_{1}}{T_{1}}=\frac{V_{2}}{T_{2}}$ $V_{1}=2 L$ $V_{2}=?$ $T_{1}=273+23.4=296.4 \mathrm{K} \quad T_{2}=273+26.1=299.1$ $\therefore V_{2}=\frac{V_{1} T_{2}}{T_{1}}=\frac{2 L \times 299.1 K}{296.4 K}=2.018 L$ Q. What is the increase in volume when the temperature of 800 $m L$ of air increases from $27^{\circ} \mathrm{C}$ to $47^{\circ} \mathrm{Cunder}$ constant pressure of $1 \mathrm{bar} ?$ [NCERT] Ans. Since the amount of gas and the pressure remains constant, Charles' law is applicable. i.e. $\frac{V_{1}}{T_{1}}=\frac{V_{2}}{T_{2}}$ $V_{1}=800 \mathrm{mL}$ $V_{2}=?$ $T_{1}=273+27=300 K ; \quad T_{2}=273+47=320 K$ $\frac{800 m L}{300 K}=\frac{V_{2}}{320 K}$ or $\quad V_{2}=\frac{(800 \mathrm{mL})}{(300 \mathrm{K})} \times(320 \mathrm{K})=853.3 \mathrm{mL}$ Increase in volume of air $=853.3-800=53.3 \mathrm{mL}$ Q. A cylinder containing cooking gas can withstand a pressure of 14.9 atmospheres. 
Q. A cylinder containing cooking gas can withstand a pressure of 14.9 atmospheres. The pressure gauge of the cylinder indicates 12 atmospheres at 27 °C. Due to a sudden fire in the building, the temperature starts rising. At what temperature will the cylinder explode?
Ans. Since the gas is confined in a cylinder, its volume remains constant, so $\dfrac{p_1}{T_1} = \dfrac{p_2}{T_2}$ and $T_2 = \dfrac{14.9 \times 300}{12} = 372.5\ \text{K}$, i.e., about 99.5 °C.
Q. What will be the pressure exerted by a mixture when 0.5 L of $H_2$ at 0.8 bar and 2.0 L of oxygen at 0.7 bar are introduced into a 1 L container at 27 °C? [NCERT]
Q. Calculate the temperature of 4.0 moles of a gas occupying 5 dm³ at 3.32 bar ($R = 0.083$ bar dm³ K⁻¹ mol⁻¹). [NCERT]
Ans. According to the ideal gas equation $pV = nRT$: $T = \dfrac{pV}{nR} = \dfrac{3.32 \times 5}{4.0 \times 0.083} = 50\ \text{K}$.
Q. When 2 g of gas A is introduced into an evacuated flask kept at 25 °C, the pressure is found to be 1 atm. If 3 g of another gas B is then added to the same flask, the total pressure becomes 1.5 atm at the same temperature. Assuming ideal behaviour of the gases, calculate the ratio of molecular weights $M_A : M_B$.
Ans. According to the ideal gas equation, at the same $V$ and $T$ the partial pressures are proportional to the numbers of moles: $\dfrac{p_A}{p_B} = \dfrac{2/M_A}{3/M_B} = \dfrac{1}{0.5}$, which gives $M_A : M_B = 1 : 3$.
Q. An evacuated glass vessel weighs 50.0 g when empty, 148.0 g when filled with a liquid of density 0.98 g mL⁻¹ and 50.5 g when filled with an ideal gas at 760 mm Hg at 300 K. Determine the molecular weight of the gas.
Ans. Weight of liquid = 148 − 50 = 98 g; with density 0.98 g mL⁻¹ the volume of the vessel is $\dfrac{98}{0.98} = 100\ \text{mL}$. Weight of gas = 50.5 − 50.0 = 0.5 g, $V = 0.1\ \text{L}$, $P = 1\ \text{atm}$, $T = 300\ \text{K}$. From $pV = \dfrac{w}{M}RT$: $\dfrac{1 \times 100}{1000} = \dfrac{0.5}{M} \times 0.082 \times 300$, so $M = 123.15\ \text{g mol}^{-1}$.
Q. At 0 °C, the density of a gaseous oxide at 2 bar is the same as that of nitrogen at 5 bar. What is the molecular mass of the gaseous oxide? [NCERT]
Ans. Density of $N_2$ at 5 bar and 0 °C: $d = \dfrac{PM}{RT} = \dfrac{5 \times 28}{0.0831 \times 273.15} = 6.168\ \text{g L}^{-1}$. Molar mass of the oxide: $M = \dfrac{dRT}{P} = \dfrac{6.168 \times 0.0831 \times 273.15}{2} = 70.0\ \text{g mol}^{-1}$.
Q. The density of a gas is found to be 5.46 g/dm³ at 27 °C and 2 bar pressure. What will be its density at STP? [NCERT]
Q. What will be the pressure exerted by a mixture of 3.2 g of methane and 4.4 g of carbon dioxide contained in a 9 dm³ flask at 27 °C?
Ans. $p = \dfrac{n}{V}RT = \dfrac{w}{M}\dfrac{RT}{V}$: $p_{CH_4} = \dfrac{3.2}{16} \times \dfrac{0.0821 \times 300}{9} = 0.55\ \text{atm}$; $p_{CO_2} = \dfrac{4.4}{44} \times \dfrac{0.0821 \times 300}{9} = 0.27\ \text{atm}$; $p_{Total} = 0.55 + 0.27 = 0.82\ \text{atm}$.
Q. Calculate the total pressure in a mixture of 8 g of $O_2$ and 4 g of $H_2$ confined in a vessel of 1 dm³ at 27 °C ($R = 0.083$ bar dm³ K⁻¹ mol⁻¹; atomic masses O = 16 u, H = 1 u). [NCERT]
Ans. $pV = nRT$: $p_{O_2} = \dfrac{8}{32} \times \dfrac{0.083 \times 300}{1} = \dfrac{24.9}{4} = 6.225\ \text{bar}$ and $p_{H_2} = \dfrac{4}{2} \times \dfrac{0.083 \times 300}{1} = 49.80\ \text{bar}$. Total pressure = 6.225 + 49.80 = 56.025 bar.
Q. For 10 minutes each, at 27 °C, nitrogen and an unknown gas are leaked through two identical holes into a common vessel of 3 L capacity. The resulting pressure is 4.18 bar and the mixture contains 0.4 mole of nitrogen. What is the molar mass of the unknown gas? [NCERT]
Q. Payload is defined as the difference between the mass of the displaced air and the mass of the balloon. Calculate the payload when a balloon of radius 10 m and mass 100 kg is filled with helium at 1.66 bar at 27 °C. (Density of air = 1.2 kg m⁻³ and $R = 0.083$ bar dm³ K⁻¹ mol⁻¹.) [NCERT]
Ans. Volume of the balloon: $V = \dfrac{4}{3}\pi r^3 = \dfrac{4}{3} \times 3.14 \times 10^3 = 4186.7\ \text{m}^3$. Mass of displaced air = $4186.7 \times 1.2 = 5024.04\ \text{kg}$. Moles of helium present = $\dfrac{pV}{RT} = \dfrac{1.66 \times 4186.7 \times 10^3}{0.083 \times 300} = 279.11 \times 10^3\ \text{mol}$, so the mass of helium = $279.11 \times 10^3 \times 4\ \text{g} = 1116.4\ \text{kg}$. Mass of the filled balloon = 100 + 1116.4 = 1216.4 kg. Payload = mass of displaced air − mass of balloon = 5024.04 − 1216.4 = 3807.6 kg.
Q. A balloon of diameter 20 m weighs 100 kg. Calculate the payload if it is filled with He at 1.1 atm and 27 °C. The density of air is 1.2 kg m⁻³ ($R = 0.082$ dm³ atm K⁻¹ mol⁻¹).
Ans. Volume of the balloon ($r = 10$ m): $V = \dfrac{4}{3} \times 3.14 \times 10^3 = 4187\ \text{m}^3$. Mass of displaced air = $4187 \times 1.2 = 5024.4\ \text{kg}$. Moles of gas present: $n = \dfrac{PV}{RT} = \dfrac{1 \times 4187 \times 10^3}{0.082 \times 298} = 171.3 \times 10^3\ \text{mol}$; mass of He = $171.3 \times 10^3 \times 4\ \text{g} = 685.3\ \text{kg}$. Mass of the filled balloon = 100 + 685.3 = 785.3 kg. Payload = 5024.4 − 785.3 = 4239.1 kg.
Q. Using van der Waals' equation, calculate the constant $a$ when two moles of a gas confined in a 4 L flask exert a pressure of 11.0 atmospheres at a temperature of 300 K. The value of $b$ is 0.05 L mol⁻¹.
Ans. $\left(p + \dfrac{an^2}{V^2}\right)(V − nb) = nRT$ with $n = 2$ mol, $V = 4$ L, $p = 11$ atm, $T = 300$ K, $b = 0.05$ L mol⁻¹. Substituting: $\left(11 + \dfrac{4a}{16}\right)(4 − 0.1) = 2 \times 0.082 \times 300$, i.e., $\dfrac{176 + 4a}{16} \times 3.9 = 49.2$, so $(176 + 4a) \times 3.9 = 787.2$, $15.6a = 787.2 − 686.4 = 100.8$ and $a = 6.46\ \text{atm L}^2\,\text{mol}^{-2}$.
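The payload calculation chains several steps (sphere volume, displaced-air mass, moles of helium, helium mass). The following Python sketch is an illustration added here, not part of the notes; it reproduces the first balloon problem, with the small difference from the book value coming from using π instead of 3.14.

```python
# Payload of a helium balloon: displaced-air mass minus the mass of the shell
# plus the helium inside it.
import math

R_BAR = 0.083  # bar dm^3 K^-1 mol^-1

def payload_kg(radius_m, shell_mass_kg, p_bar, t_kelvin,
               air_density_kg_m3=1.2, molar_mass_he_g=4.0):
    volume_m3 = 4.0 / 3.0 * math.pi * radius_m ** 3
    displaced_air_kg = volume_m3 * air_density_kg_m3
    n_he = p_bar * volume_m3 * 1000.0 / (R_BAR * t_kelvin)  # 1 m^3 = 1000 dm^3
    helium_kg = n_he * molar_mass_he_g / 1000.0
    return displaced_air_kg - (shell_mass_kg + helium_kg)

# Radius 10 m, shell 100 kg, helium at 1.66 bar and 27 C.
print(round(payload_kg(10, 100, 1.66, 300), 1))  # ~3809.5 kg; the book's 3807.6
                                                 # comes from taking pi = 3.14
```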
Q. 34.05 mL of phosphorus vapour weigh 0.0625 g at 546 °C and 1 bar pressure. What is the molar mass of phosphorus? [NCERT]
Ans. Moles of phosphorus vapour: $n = \dfrac{PV}{RT} = \dfrac{1 \times 34.05 \times 10^{-3}}{0.0831 \times 819.15} = 5.0 \times 10^{-4}$. If the molar mass is $M$ g mol⁻¹, then $\dfrac{0.0625}{M} = 5.0 \times 10^{-4}$, so $M = \dfrac{0.0625}{5.0 \times 10^{-4}} = 125\ \text{g mol}^{-1}$.
Q. The drain cleaner Drainex contains small bits of aluminium which react with caustic soda to produce hydrogen. What volume of hydrogen at 20 °C and 1 bar will be released when 0.15 g of aluminium reacts? [NCERT]
Ans. The chemical reaction taking place is $2Al + 2NaOH + 2H_2O \rightarrow 2NaAlO_2 + 3H_2$. 0.15 g of Al is $\dfrac{0.15}{27}$ mol, giving $\dfrac{3}{2} \times \dfrac{0.15}{27} = 8.33 \times 10^{-3}$ mol of $H_2$, i.e., $V = \dfrac{nRT}{p} = \dfrac{8.33 \times 10^{-3} \times 0.083 \times 293}{1} \approx 0.203\ \text{L} \approx 203\ \text{mL}$.
Q. For a real gas obeying van der Waals' equation, a graph is plotted between $PV_m$ (y-axis) and $P$ (x-axis), where $V_m$ is the molar volume. Find the y-intercept of the graph.
Ans. For a real gas the plot of $PV_m$ versus $P$ can be of type A or B, but at the intercept $P = 0$, and at any low pressure van der Waals' equation reduces to the ideal gas equation, $PV = nRT$ or $PV_m = RT$. Hence the y-intercept of the graph is $RT$.
Q. (i) State Graham's law of diffusion. Arrange the gases $CO_2$, $SO_2$ and $NO_2$ in order of increasing rate of diffusion. (ii) What is the volume of 0.300 mol of an ideal gas at 60 °C and 0.821 atm pressure?
Ans. (i) Graham's law: under similar conditions of temperature and pressure, the rate of diffusion of a gas is inversely proportional to the square root of its molar mass. Molar masses: $CO_2 = 44$ u, $SO_2 = 64$ u, $NO_2 = 46$ u. Since $r_{diff} \propto \dfrac{1}{\sqrt{M}}$, the larger the molar mass the lower the rate of diffusion under similar conditions, so the increasing order of rates is $r_{SO_2} < r_{NO_2} < r_{CO_2}$. (ii) $V = \dfrac{nRT}{P} = \dfrac{0.300 \times 0.0821 \times 333}{0.821} \approx 10\ \text{L}$.
Q. HCl and $NH_3$ gases are allowed to enter through the two ends of a glass tube of length 200 cm. At what distance will ammonium chloride first appear? [NCERT]
Ans. $\dfrac{r_{NH_3}}{r_{HCl}} = \dfrac{l_1}{200 − l_1} = \sqrt{\dfrac{M_{HCl}}{M_{NH_3}}} = \sqrt{\dfrac{36.5}{17}} = \sqrt{2.147} = 1.465$, so $l_1 = 293 − 1.465\,l_1$, $2.465\,l_1 = 293$ and $l_1 = 118.88$ cm from the $NH_3$ end, i.e., $200 − 118.88 = 81.12$ cm from the HCl end.
Q. Equal volumes of two gases A and B diffuse through a porous pot in 20 and 10 seconds, respectively. If the molar mass of A is 80, find the molar mass of B. [NCERT]
Ans. $\dfrac{r_A}{r_B} = \dfrac{t_B}{t_A} = \sqrt{\dfrac{M_B}{M_A}}$ with $t_A = 20$ s, $t_B = 10$ s, $M_A = 80$: $\dfrac{10}{20} = \sqrt{\dfrac{M_B}{80}}$, so $M_B = \dfrac{80}{4} = 20\ \text{g mol}^{-1}$.
Q. A mixture of hydrogen and oxygen at one bar pressure contains 20% by weight of hydrogen. Calculate the partial pressure of hydrogen. [NCERT]
Ans. If the mixture contains 20% hydrogen by weight, then for $H_2 = 20$ g and $O_2 = 80$ g: $n_{H_2} = \dfrac{20}{2} = 10$ mol and $n_{O_2} = \dfrac{80}{32} = 2.5$ mol, so $p_{H_2} = \dfrac{n_{H_2}}{n_{H_2} + n_{O_2}} \times P_{total} = \dfrac{10}{12.5} \times 1\ \text{bar} = 0.8\ \text{bar}$.
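The ammonium-chloride-ring problem is a direct use of Graham's law: the two gases cover distances in the ratio of their diffusion rates. A small illustrative Python sketch (not from the notes) of that calculation:

```python
# Graham's law applied to the 200 cm NH3/HCl tube problem.
import math

def meeting_point(tube_length_cm, m_gas_a, m_gas_b):
    """Distance travelled by gas A (entering at x = 0) when it meets gas B;
    rate_A / rate_B = sqrt(M_B / M_A) and both gases start at the same time."""
    ratio = math.sqrt(m_gas_b / m_gas_a)
    return tube_length_cm * ratio / (1 + ratio)

d_nh3 = meeting_point(200, 17.0, 36.5)
print(round(d_nh3, 1), round(200 - d_nh3, 1))  # ~118.9 cm from the NH3 end,
                                               # ~81.1 cm from the HCl end
```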
Q. Calculate the total pressure in a mixture of 8 g of oxygen and 4 g of hydrogen confined in a vessel of 1 dm³ at 27 °C ($R = 0.083$ bar dm³ K⁻¹ mol⁻¹). [NCERT]
Ans. Partial pressure of oxygen: $p_{O_2} = \dfrac{nRT}{V} = \dfrac{8 \times 0.083 \times 300}{32 \times 1} = 6.225\ \text{bar}$. Partial pressure of hydrogen ($n = \dfrac{4}{2} = 2$ mol): $p_{H_2} = \dfrac{2 \times 0.083 \times 300}{1} = 49.8\ \text{bar}$. Total pressure = $p_{O_2} + p_{H_2} = 6.225 + 49.8 = 56.025$ bar.
Q. A vessel of 1.00 dm³ capacity contains 8.00 g of oxygen and 4.00 g of hydrogen at 27 °C. Calculate the partial pressure of each gas and also the total pressure in the container ($R = 0.083$ bar dm³ K⁻¹ mol⁻¹). [NCERT]
Ans. Moles of hydrogen $n_1 = \dfrac{4}{2} = 2$ mol; moles of oxygen $n_2 = \dfrac{8}{32} = 0.25$ mol. Applying the ideal gas equation to each gas: $p_{H_2} = \dfrac{n_1RT}{V} = \dfrac{2 \times 0.083 \times 300}{1} = 49.8\ \text{bar}$ and $p_{O_2} = \dfrac{n_2RT}{V} = \dfrac{0.25 \times 0.083 \times 300}{1} = 6.225\ \text{bar}$. Total pressure of the gaseous mixture = $p_{H_2} + p_{O_2} = 49.8 + 6.225 = 56.025$ bar.
Q. Calculate (i) the average kinetic energy of 32 g of methane molecules at 27 °C ($R = 8.314$ J K⁻¹ mol⁻¹), (ii) the root mean square speed and (iii) the most probable speed of methane molecules at 27 °C.
Ans. (i) $E_k = \dfrac{3nRT}{2}$ with $n = \dfrac{32}{16} = 2$ and $T = 300$ K: $E_k = \dfrac{3 \times 2 \times 8.314 \times 300}{2} = 7482.6\ \text{J}$. (ii) $u_{rms} = \sqrt{\dfrac{3RT}{M}}$; using $R = 8.314 \times 10^7$ erg K⁻¹ mol⁻¹ to obtain the speed in cm s⁻¹: $u_{rms} = \sqrt{\dfrac{3 \times 8.314 \times 10^7 \times 300}{16}} = 68385.85\ \text{cm s}^{-1} = 683.9\ \text{m s}^{-1}$. (iii) Most probable speed $\alpha = \dfrac{u_{rms}}{1.224} = \dfrac{683.9}{1.224} = 558.7\ \text{m s}^{-1}$.
Q. Calculate the root mean square speed of ozone kept in a closed vessel at 20 °C and a pressure of 82 mm of Hg.
Ans. The volume occupied by 1 mol of $O_3$ at 20 °C and 82 mm is obtained from the general gas equation $\dfrac{P_1V_1}{T_1} = \dfrac{P_2V_2}{T_2}$: $V_2 = \dfrac{P_1V_1T_2}{T_1P_2} = \dfrac{76 \times 22400 \times 293}{273 \times 82} = 22281.92\ \text{cm}^3$. Then $u = \sqrt{\dfrac{3PV}{M}}$ with $P$ in dyn cm⁻², $P = 82 \times 13.6 \times 981$: $u = \sqrt{\dfrac{3 \times 82 \times 13.6 \times 981 \times 22281.92}{48}} = 3.90 \times 10^4\ \text{cm s}^{-1}$.
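The three molecular speeds used above differ only in their numerical prefactors ($\sqrt{3RT/M}$, $\sqrt{8RT/\pi M}$ and $\sqrt{2RT/M}$). A short illustrative Python sketch (mine, in SI units) for methane at 27 °C:

```python
# Kinetic-theory speeds for a gas of molar mass M (kg/mol) at temperature T (K).
import math

R_SI = 8.314  # J K^-1 mol^-1

def speeds(molar_mass_kg, t_kelvin):
    rms = math.sqrt(3 * R_SI * t_kelvin / molar_mass_kg)
    avg = math.sqrt(8 * R_SI * t_kelvin / (math.pi * molar_mass_kg))
    most_probable = math.sqrt(2 * R_SI * t_kelvin / molar_mass_kg)
    return rms, avg, most_probable

rms, avg, mp = speeds(16e-3, 300)                  # CH4, M = 16 g/mol
print(round(rms, 1), round(avg, 1), round(mp, 1))  # ~683.9, ~630.0, ~558.4 m/s
```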
Q. (i) Explain Boyle's law with the help of the kinetic theory of gases. (ii) An open beaker at 27 °C is heated to 477 °C. What fraction of air would have been expelled out? [NCERT]
Ans. (i) The kinetic theory of gases assumes that the pressure of a gas is due to collisions of gas molecules with the walls of the container; the higher the collision frequency, the greater the pressure. Reducing the volume of the gas increases the number of molecules per unit volume, to which the pressure is directly proportional. Therefore the volume of the gas is reduced if the pressure is increased, i.e., pressure is inversely proportional to volume. Quantitatively, $\dfrac{1}{2}mu^2 = \dfrac{3}{2}kT$ and $p = \dfrac{1}{3}\dfrac{N}{V}mu^2 = \dfrac{2}{3}\dfrac{N}{V} \times \dfrac{1}{2}mu^2 = \dfrac{2}{3}\dfrac{N}{V} \times \dfrac{3}{2}kT$, so $pV = NkT$ and $p \propto \dfrac{1}{V}$ at constant temperature for a fixed number of gas molecules. (ii) Since $p$ and $V$ are constant, $n_1T_1 = n_2T_2$; with $T_1 = 300$ K and $T_2 = 750$ K the fraction of air remaining is $\dfrac{n_2}{n_1} = \dfrac{300}{750} = \dfrac{2}{5}$, so the fraction of air escaped is $1 − \dfrac{2}{5} = \dfrac{3}{5}$.
Q. At which temperature is the average velocity of oxygen molecules equal to the rms velocity at 27 °C?
Ans. Average velocity $= \sqrt{\dfrac{8RT}{\pi M}}$; rms velocity at 300 K $= \sqrt{\dfrac{3R \times 300}{M}}$. Equating the two, $\dfrac{8T}{\pi} = 900$, so $T = 353.4\ \text{K} \approx 80.3\ ^{\circ}\text{C}$.
Q. At a constant temperature, a gas occupies a volume of 200 mL at a pressure of 0.720 bar. It is subjected to an external pressure of 0.900 bar. What is the resulting volume of the gas? [NCERT]
Ans. Boyle's law is applicable as the amount and temperature are unaltered: $p_1V_1 = p_2V_2$, so $V_2 = \dfrac{0.720}{0.900} \times 200 = 160\ \text{mL}$. Boyle's law is manifested in the working of many devices used in daily life, such as the cycle pump, the aneroid barometer and the tyre pressure gauge.
Q. Calculate the number of nitrogen molecules present in 2.8 g of nitrogen gas. [NCERT]
Ans. Moles of nitrogen = $\dfrac{2.8}{28} = 0.1$ mol; number of nitrogen molecules = $0.1 \times 6.022 \times 10^{23} = 6.022 \times 10^{22}$.
Q. If the density of a gas at sea level at 0 °C is 1.29 kg m⁻³, what is its molar mass? (Assume that the pressure is equal to 1 bar.) [NCERT]
Ans. $pV_m = RT$, i.e., $\dfrac{pM}{d} = RT$, so $M = \dfrac{dRT}{p} = \dfrac{1.29 \times 8.314 \times 273.15}{1.0 \times 10^5} = 0.0293\ \text{kg mol}^{-1}$, i.e., a molar mass of 29.3 g mol⁻¹.
Q. Which of the two gases, ammonia and hydrogen chloride, will diffuse faster and by what factor? [NCERT]
Ans. $\dfrac{r_{NH_3}}{r_{HCl}} = \left(\dfrac{M_{HCl}}{M_{NH_3}}\right)^{1/2} = \left(\dfrac{36.5}{17}\right)^{1/2} = 1.46$; thus ammonia diffuses 1.46 times faster than hydrogen chloride gas.
Q. A 2.5 L flask contains 0.25 mol each of sulphur dioxide and nitrogen gas at 27 °C. Calculate the partial pressure exerted by each gas and also the total pressure. [NCERT]
Ans. Partial pressure of sulphur dioxide: $p_{SO_2} = \dfrac{nRT}{V} = \dfrac{0.25 \times 8.314 \times 300}{2.5 \times 10^{-3}} = 2.49 \times 10^5\ \text{Pa}$. Similarly, $p_{N_2} = 2.49 \times 10^5\ \text{Pa}$. Following Dalton's law, $p_{Total} = p_{N_2} + p_{SO_2} = 4.98 \times 10^5\ \text{Pa}$.
Q. An open vessel at 27 °C is heated until $\dfrac{3}{5}$ of the air in it has been expelled. Assuming that the volume of the vessel remains constant, find the temperature to which the vessel has been heated.
Ans. As the vessel is open, pressure and volume remain constant. Thus, if $n_1$ moles are present at $T_1$ and $n_2$ moles at $T_2$, then $PV = n_1RT_1 = n_2RT_2$, so $n_1T_1 = n_2T_2$, i.e., $\dfrac{n_1}{n_2} = \dfrac{T_2}{T_1}$. Suppose the number of moles of air originally present is $n$; after heating, the moles expelled are $\dfrac{3}{5}n$, so the moles left are $\dfrac{2}{5}n$. With $n_1 = n$, $T_1 = 300$ K and $n_2 = \dfrac{2}{5}n$: $\dfrac{n}{\frac{2}{5}n} = \dfrac{T_2}{300}$, so $T_2 = 750\ \text{K}$. Alternatively, if the volume of the vessel is $V$, the air initially at 27 °C occupies $V$; the air expelled corresponds to $\dfrac{3}{5}V$, so the air left at 27 °C would occupy $\dfrac{2}{5}V$ and on heating to $T_2$ it expands to $V$. As the pressure remains constant (the vessel being open), $\dfrac{V_1}{T_1} = \dfrac{V_2}{T_2}$, i.e., $\dfrac{2V/5}{300} = \dfrac{V}{T_2}$, so $T_2 = 750\ \text{K}$.
Q. A glass bulb contains 2.24 L of $H_2$ and 1.12 L of $D_2$ at STP. It is connected to a fully evacuated bulb by a stopcock with a small opening. The stopcock is opened for some time and then closed. The first bulb now contains 0.10 g of $D_2$. Calculate the percentage composition by weight of the gases in the second bulb.
Ans. Weight of 2.24 L of $H_2$ at STP = 0.2 g (molar mass of $H_2$ = 2), i.e., 0.1 mol; weight of 1.12 L of $D_2$ at STP = 0.2 g (molar mass of $D_2$ = 4), i.e., 0.05 mol. As the numbers of moles of the two gases differ while $V$ and $T$ are the same, their partial pressures are in the ratio of their numbers of moles: $\dfrac{P_{H_2}}{P_{D_2}} = \dfrac{n_{H_2}}{n_{D_2}} = \dfrac{0.1}{0.05} = 2$. $D_2$ remaining in the first bulb = 0.1 g (given), so $D_2$ diffused into the second bulb = 0.2 − 0.1 = 0.1 g = 0.56 L at STP. Now $\dfrac{r_{H_2}}{r_{D_2}} = \dfrac{v_{H_2}/t}{v_{D_2}/t} = \dfrac{P_{H_2}}{P_{D_2}}\sqrt{\dfrac{M_{D_2}}{M_{H_2}}}$, so $\dfrac{v_{H_2}}{0.56} = 2\sqrt{2}$ and $v_{H_2} = 1.584\ \text{L} = 0.14\ \text{g}$ of $H_2$. Weight of the gases in the second bulb = 0.10 g ($D_2$) + 0.14 g ($H_2$) = 0.24 g; hence the second bulb contains $\dfrac{0.10}{0.24} \times 100 = 41.67\%$ $D_2$ and $100 − 41.67 = 58.33\%$ $H_2$ by weight.
Q. An LPG (liquefied petroleum gas) cylinder weighs 14.8 kg when empty.
When full, it weighs 29.0 kg and shows a pressure of 2.5 atm. In the course of use at 27 °C, the mass of the full cylinder is reduced to 23.2 kg. Find the volume of the gas used at normal conditions, and the final pressure inside the cylinder. Assume LPG to be n-butane with a normal boiling point of 0 °C.
Ans. Weight of LPG originally present = 29.0 − 14.8 = 14.2 kg at a pressure of 2.5 atm; weight of LPG present after use = 23.2 − 14.8 = 8.4 kg. Since the volume of the cylinder is constant, applying $pV = nRT$: $\dfrac{p_1}{p_2} = \dfrac{n_1}{n_2} = \dfrac{W_1/M}{W_2/M} = \dfrac{W_1}{W_2}$, so $\dfrac{2.5}{p_2} = \dfrac{14.2}{8.4}$ and $p_2 = \dfrac{2.5 \times 8.4}{14.2} = 1.48\ \text{atm}$. Weight of the gas used = 14.2 − 8.4 = 5.8 kg, i.e., $\dfrac{5.8 \times 10^3}{58} = 100$ moles. At normal conditions ($p = 1$ atm, $T = 273 + 27 = 300$ K) the volume of 100 moles of LPG is $V = \dfrac{nRT}{p} = \dfrac{100 \times 0.082 \times 300}{1} = 2460\ \text{L} = 2.46\ \text{m}^3$.
Q. The compressibility factor for one mole of a van der Waals gas at 0 °C and 100 atm pressure is found to be 0.5. Assuming that the volume of the gas molecules is negligible, calculate the van der Waals constant $a$.
Ans. Compressibility factor $Z = \dfrac{pV}{RT}$: $0.5 = \dfrac{100 \times V}{0.082 \times 273}$, so $V = \dfrac{0.5 \times 0.082 \times 273}{100} = 0.1119\ \text{L}$. If the volume of the molecules is negligible, $b$ is negligible and van der Waals' equation becomes $\left(p + \dfrac{a}{V^2}\right)V = RT$, i.e., $pV = RT − \dfrac{a}{V}$, so $a = RTV − pV^2 = (0.082 \times 273 \times 0.1119) − (100 \times 0.1119^2) = 1.253\ \text{atm L}^2\,\text{mol}^{-2}$.
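The last answer generalizes directly: given a measured compressibility factor and neglecting $b$, the van der Waals $a$ follows from $a = RTV − pV^2$ with $V = ZRT/p$. A small Python sketch (not part of the original notes):

```python
# Recovering the van der Waals "a" from a measured compressibility factor,
# neglecting the co-volume b (as in the problem above).
R_L_ATM = 0.082  # L atm K^-1 mol^-1

def vdw_a_from_z(z, p_atm, t_kelvin):
    """With b ~ 0: (p + a/V^2) V = RT  =>  a = RTV - pV^2, where V = ZRT/p."""
    v = z * R_L_ATM * t_kelvin / p_atm          # molar volume from Z = pV/RT
    return R_L_ATM * t_kelvin * v - p_atm * v * v

print(round(vdw_a_from_z(0.5, 100, 273), 3))    # ~1.253 atm L^2 mol^-2
```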
Convert meters per second [m/s] to kilometers per hour [km/h] and vice-versa
Online converter for any value from meters per second [m/s] to kilometers per hour [km/h] and back.
Assume that \( v_{(m.s^{-1})} \) is the velocity expressed in \( m.s^{-1} \), i.e. the distance travelled during one second. To get the distance travelled during one minute, this value must be multiplied by 60. To get the distance in meters travelled during one hour, \( v_{(m.s^{-1})} \) must be multiplied by 60 × 60 = 3600. The result is a velocity expressed in \( m.h^{-1} \). To convert this result to kilometers per hour, we divide the velocity (expressed in \( m.h^{-1} \)) by 1000, since one kilometer is equal to 1000 meters. The conversion can therefore be done with the following formula: $$ v_{(km.h^{-1})} = \frac{3600}{1000}\, v_{(m.s^{-1})} = 3.6 \times v_{(m.s^{-1})} $$ and vice-versa: $$ v_{(m.s^{-1})} = \frac{1000}{3600}\, v_{(km.h^{-1})} = \frac{v_{(km.h^{-1})}}{3.6} $$
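For completeness, the two formulas above can be wrapped in a pair of one-line helpers; this Python sketch is an illustration added here, not part of the converter page.

```python
# Velocity conversions following the formulas above.
def ms_to_kmh(v_ms: float) -> float:
    return v_ms * 3.6

def kmh_to_ms(v_kmh: float) -> float:
    return v_kmh / 3.6

print(ms_to_kmh(10))   # 36.0 km/h
print(kmh_to_ms(90))   # 25.0 m/s
```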
nature communications A very large-scale microelectrode array for cellular-resolution electrophysiology David Tsai1, Daniel Sawyer1, Adrian Bradd1, Rafael Yuste2 & Kenneth L. Shepard3 Nature Communications volume 8, Article number: 1802 (2017) Cite this article Techniques and instrumentation An Addendum to this article was published on 24 October 2018 In traditional electrophysiology, spatially inefficient electronics and the need for tissue-to-electrode proximity defy non-invasive interfaces at scales of more than a thousand low noise, simultaneously recording channels. Using compressed sensing concepts and silicon complementary metal-oxide-semiconductors (CMOS), we demonstrate a platform with 65,536 simultaneously recording and stimulating electrodes in which the per-electrode electronics consume an area of 25.5 μm by 25.5 μm. Application of this platform to mouse retinal studies is achieved with a high-performance processing pipeline with a 1 GB/s data rate. The platform records from 65,536 electrodes concurrently with a ~10 µV r.m.s. noise; senses spikes from more than 34,000 electrodes when recording across the entire retina; automatically sorts and classifies greater than 1700 neurons following visual stimulation; and stimulates individual neurons using any number of the 65,536 electrodes while observing spikes over the entire retina. The approaches developed here are applicable to other electrophysiological systems and electrode configurations. The ability to observe and manipulate, with single-cell precision, the activities of large neuronal populations is an essential step toward better understanding of the nervous system1. It is now possible to observe activities in large neuronal populations through calcium imaging2,3,4. However, intracellular calcium concentration is influenced by many simultaneously active and mutually interacting mechanisms, including voltage- and ligand-gated calcium channels, intracellular stores, and chelation by proteins and by the fluorescent calcium indicator5. Many of these are not under direct control of the experimentalist and operate over a time scale substantially longer than that of action potentials, thus complicating interpretation. In particular, inferring action potentials from calcium signals require non-trivial calibration6, particularly if one is interested in precise spike timing and not just the number of spikes. Extracellular electrophysiology, in contrast, enables direct read out of spikes. Recent progress has substantially increased the number of simultaneously recordable neurons7. However, the observable neurons, ranging from few tens to approximately a thousand8,9,10, remains small relative to the number of neurons in the brain. This is primarily due to trade-offs between several competing requirements: adequate recording signal-to-noise ratio (SNR), minimal biological invasiveness, at-scale recording, and, ideally, ability to stimulate the recorded neurons with precision. Because of spatial requirements, having a complete amplifier chain and digitizer for each electrode, with sufficient noise performance, bandwidth and dynamic range, is infeasible for at-scale electrophysiology. Consequently, time-division multiplexing has been the mainstay for increasing density and channel count9,11,12, by sharing one signal path between several inputs. Traditional time-division multiplexing constitutes a sampled system. 
Since the input channels are sequentially scanned (sampled) over time, per-channel low-pass filters are required to reject frequencies above half the sampling rate (Supplementary Fig. 1a). Failure to do so causes aliasing of frequencies above half the per-channel sampling rate, thus degrading the SNR. Moreover, the more channels an inadequately filtered system has, the greater the SNR degradation. The physical dimension of these filters dictates the density limit of large-scale electrophysiology. With an ideal noise ceiling of approximately 10 µV rms13, a parsimonious, multiplexed recorder, consisting of the electrode, an antialiasing filter and an amplifier, cannot be smaller than ~10,000 µm2 to ensure adequate noise performance, due to the capacitance density available in today's microelectronic technologies (Supplementary Fig. 1b; also note Supplementary Fig. 1 caption). With mammalian neuronal soma ≤25 µm in diameter, this 20-to-1 dimensional mismatch fundamentally restricts the scalability of current approaches in electrophysiology. Due to this noise-verse-density trade-off, today's large-scale neural recorders have either low noise but low simultaneously recording channel count8,14 or high channel count but high noise11,15,16. Here we present a large-scale, high-density and low noise electrophysiological recording and stimulation platform based on CMOS electronics and an acquisition paradigm that negates the requirement for per-channel antialiasing filters, thereby overcoming scaling limitations faced by existing systems. This allows us to maintain 10 µV rms recording noise with per-channel electronics for 65,536 channels at a 25.5-µm electrode pitch, avoiding the common trade-off between density, channel count and noise8,11,14,15,16,17,18,19. We then demonstrate the platform's ability to record from 10 s of thousands of neurons simultaneously. In conjunction with visual stimuli and the platform's high-performance computing infrastructure, the system could functionally classify more than 1700 neurons in the retina automatically, including identification of rare cell types. Finally, combining cellular-resolution microstimulators with dense recordings across the entire retina allowed us to re-examine how electrical stimulation recruits neurons in a network, and importantly, how focal activation could be achieved. Compressed sensing-inspired electrophysiology We constructed a 65,536-channel, multiplexed, extracellular electrophysiology system consisting of a recording and stimulation array based on a custom integrated circuit (IC), circuit-board-level analog and digital circuits and custom software libraries (Fig. 1a). Traditional implementations of such multiplexed systems incorporate per-channel low-pass filters to prevent noise aliasing (Supplementary Fig. 1a). Our approach (Fig. 1b, Supplementary Fig. 1b) does not use these filters, but instead, avoids aliasing using concepts from compressed sensing20,21. This is made possible by noting several characteristics of extracellular recordings and thermal noise aliasing. Platform for dense, large-scale electrophysiology. a Overview of acquisition paradigm based on compressed sensing. b Sparse sampling by the ADC, followed by digital reconstruction and removal of the spectral contribution by the aliased thermal noise. c A 65,536-channel recording and microstimulation grid based on CMOS-integrated circuits (IC). d Packaged IC. e Scanning electron micrograph of the 14 × 14 µm electrodes, spaced 25.5 µm apart. 
f Illustration of the experimental platform consisting of: the IC; supporting circuits; data processing pipeline containing CPUs and GPUs; and optics for near-infrared visualization and visual pattern delivery. ADC, analog-to-digital-converter; MUX, multiplexer First, electrophysiological recording is dominated by thermal noise at frequencies above a few kHz. It is a stationary process with a Gaussian time-domain amplitude distribution and uniform frequency distribution, up to the recording channel's bandwidth. Therefore, this thermal noise can be described, and generated computationally, with only two parameters, its variance and bandwidth. Second, thermal noise aliasing offers two averaging properties, which greatly simplify the reconstruction, and subsequent removal, of its spectral contribution in the under-sampled, per-channel data. The power of thermal noise is approximately uniform. As the thermal noise powers are folded down into the first Nyquist zone (Supplementary Fig. 3b) during aliasing, the slight variations in power between frequencies are averaged out. This allows us to compute the power contributed by aliasing using the expected average thermal noise power, multiplied by the number of folded Nyquist zones. Similarly, the spectral angles of thermal noise have a uniform distribution with zero mean. The angle variation between frequencies converge to zero as the aliased thermal noise are folded down into the first Nyquist zone. Taking advantage of the foregoing characteristics, we can digitally reconstruct the spectral contributions originating from the under-sampled thermal noise, then remove them from the sparsely-sampled channel data, thereby minimizing the effects of aliasing, without using per-channel anti-aliasing filters. This acquisition strategy allows us to pack 65,536 channels (Fig. 1c, d) into an area of 42.6 mm2, with 25.5 µm spacing between channels (Fig. 1e), using CMOS IC processes. Each channel can be sampled at 10 kHz during full-grid recordings, with higher sampling rates achievable by reducing the recording area. Importantly, this platform does not have the noise-verses-density trade-off of classical large-scale electrophysiology. We constructed an electrophysiological platform based on this acquisition paradigm. It consists of the aforementioned 65,536-channel CMOS IC, custom circuit boards with filters and field programmable gate arrays (FPGAs), CPUs, graphical processing units (GPUs), and an OLED display for generating visual patterns (Fig. 1f). Achieving 65,536-channel recordings with minimal noise To test the recording performance, we applied test signals through a pair silver-silver chloride electrodes into the recording chamber filled with physiological saline (Fig. 2a). The median SNR across the array was 54.9 with 200 µV test signals (Fig. 2b, c). Our system uses a capacitive recording interface15,22,23, formed by a 6-nm thick HfO2 dielectric deposited above each electrode and a pseudo-resistor constructed from a p-type MOSFET. The corner frequency is user-tuneable, and is nominally set to 100 Hz (Fig. 2d). System performance characterization. a Test setup. Recordings from the 65,536-channel grid are compared against patch clamp recordings. Inset photo: pipette above the grid. b SNR variation across the array, measured at every sixteenth row and column. The test signal was a bath-applied 1-kHz, 200 µV sine wave. c SNR distribution for all electrodes in the array. 
d Frequency response of the capacitive recording front-end (mean ± SEM, 8 electrodes). e, f Comparison of per-channel data, before (green) and after (orange) removal of aliased thermal noise, for a recording with 1 kHz signal e and a baseline recording f. Insets in e and f: PSD plots of time-domain data in e and f, respectively. The dc component has been removed to better illustrate the linear reduction in noise floor across frequencies. g The system's input referred noise was ~10 µV rms over 100–10k Hz, in saline. h Comparison to patch clamp amplifier recordings. The test signal was a 1 kHz, 100 µV peak-to-peak sine wave. Both traces have been bandpass filtered between 300–3k Hz for clarity. i Recordings before and after removing aliased thermal noise. Inset: expanded view of segment without and with spike, respectively. j Overlaid traces for 99 spike segments before and after removing aliased noise. Traces in i and j are unfiltered Each channel is typically observed at a rate well below the channel bandwidth. To minimize thermal noise aliasing, the spectral contribution of the under-sampled thermal noise is computed, then removed, from the channel data (Fig. 1b, steps 3–4). Figure 2e compares, for a 1 kHz signal applied in the saline-filled recording bath, the channel data before (green) and after (orange) removal of the aliased thermal noise. The SNR improvement is also apparent in the spectral domain (Fig. 2e inset). The noise floor was reduced uniformly across frequencies. Fig. 2f illustrates the effects of our processing strategy on a segment of baseline recording without test signal. Here, the noise was reduced from 21.7 µV rms (green) to 10.02 µV rms (orange) over the 100–3 kHz bandwidth following signal processing. When recording in physiological saline, the 65k-electrode grid had ~10 µV rms input referred noise over the 100–5 kHz bandwidth, encompassing the spike frequency range of 300–3 kHz (Fig. 2g). Finally, the per-channel signal from the 65k-electrode grid closely resembled those from patch clamp recordings—the gold standard in electrophysiology–performed adjacent to the test electrode (Fig. 2h). For clarity, both traces were bandpass filtered between 300–3 kHz. The low-frequency fluctuation was due to noise pick-up by the wires connecting the signal generator to the bath electrodes. To assess the performance of the de-noising procedure on biological recordings, Fig. 2i compares a recording before (green) and after (orange) removal of aliased noise. In particular, the inset plots in Fig. 2i show that the procedure reduced noise fluctuations without degrading the action potential waveforms. This is further illustrated in Fig. 2j for 99 spikes recorded from one electrode. The average variance of these raw waveforms is 8307.6. After removing the aliasing-induced spurious fluctuations, the average variance of the processed waveforms is 3179.0. While the amplitudes of signal and noise are both reduced, the SNR is improved by reducing the spectral contribution of aliased noise from the first Nyquist zone (Supplementary Fig. 3b verses Supplementary Fig. 3c). Collectively, these results demonstrate the ability of this system to acquire, with high SNR, weak signals having amplitudes typical of mammalian extracellular recording, and to do so at spatial resolutions down to 25.5 µm, while simultaneously providing observable spatial coverage of 42.6 mm2 with 65,536 electrodes. 
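To make the quoted noise figures concrete, the sketch below shows one conventional way to estimate the band-limited rms noise of a single channel. It is an illustration written with NumPy/SciPy, not the authors' processing pipeline, and the sampling rate, band edges and synthetic trace are assumptions chosen only for the example.

```python
# Illustrative estimate of the rms noise of one channel over 100 Hz-5 kHz,
# the band quoted for the ~10 uV rms figure in the text.
import numpy as np
from scipy.signal import butter, filtfilt

def band_limited_rms(trace_v, fs_hz, f_lo=100.0, f_hi=5000.0):
    """Return the rms amplitude of `trace_v` (volts) within [f_lo, f_hi] Hz."""
    nyq = fs_hz / 2.0
    b, a = butter(4, [f_lo / nyq, f_hi / nyq], btype="bandpass")
    return float(np.sqrt(np.mean(filtfilt(b, a, trace_v) ** 2)))

# Synthetic example: 10 uV rms white noise sampled at 20 kHz. The 100 Hz-5 kHz
# band holds roughly half the noise power, so the result is ~7 uV rms.
rng = np.random.default_rng(0)
trace = 10e-6 * rng.standard_normal(200_000)
print(band_limited_rms(trace, fs_hz=20_000) * 1e6, "uV rms")
```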
Simultaneous recordings from more than 34,000 electrodes We tested the ability of the platform to carry out at-scale, cellular-resolution recordings with single-spike sensitivity by placing a piece of mouse retina, retinal ganglion cell (RGC) side down, on the recording grid (Fig. 3a). We began by observing the neurons' spontaneous activities under scotopic conditions (Fig. 3b). Spikes were readily apparent, with 34,187 electrodes picking up spiking activities. This exceeded the best existing attempts at across-retina spike recordings by an order of magnitude in channel count24 and the best calcium imaging efforts in the retina by approximately two orders of magnitude25,26. Large-scale recordings in the retina. a Photo of setup with mouse retina. b Spontaneous spiking activities recorded simultaneously from 34,187 electrodes over 12 s, at 10 kHz per electrode. Scale bar, 1275 µm. c Unit activities were observed concurrently on several adjacent electrodes spaced 25.5 µm apart. These spikes are single-trial waveforms. Sorted spike waveforms d and spike time raster e for one electrode in b. There are 33 and 47 spikes for the first and second neuron, respectively. f Inter-spike interval (ISI) plot for the two neurons in d. Inset: zoomed-in view of the first 20 ms Spikes from each neuron were observed on multiple adjacent electrodes (Fig. 3c, Supplementary Fig. 4e) and each electrode acquired spikes from more than one neuron (Fig. 3d, Supplementary Fig. 4c). Spike sorting accuracy is substantially improved by combining spatially dependent waveforms from adjacent electrodes27 (Supplementary Fig. 4), made possible by the dense 25.5 µm electrode pitch. Figure 3d, e illustrate the sorted spike waveforms and rasters, respectively, from one electrode in Fig. 3b. Note the lack of inter-spike intervals (ISIs) less than or equal to 2.5 ms (Fig. 3f), the typical absolute refractory period of mammalian neurons. The presence of such intervals would be indicative of incorrect clustering. A similar absence of ISI violation was also observed when we sorted electrodes with an order of magnitude more events (Supplementary Fig. 4a–d). The high electrode density also allowed us to triangulate the putative location of each observed neuron on the basis of small changes in waveform amplitude over space (Fig. 4a). Functional classification of neurons using automated processing pipeline. The stimuli were 1-s, 1122-µm diameter, light spots. a Data from a sensing area of 1377 × 1275 µm were automatically analysed and, upon satisfying appropriate response characteristics, classified into one of six functional classes. Scale bar, 225 µm. b–g Raster plot for a representative ON transient RGC b, ON sustained RGC c, OFF transient RGC d, OFF sustained RGC e, ON–OFF RGC f, and suppression-by-contrast (SbC) RGC g, to five repetitions of one-second light flash. The peri-stimulus time histogram (10 ms bins) beneath each raster shows the normalized firing rate for all detected neurons in each class. The n-numbers denote the neurons considered in each histogram. All neurons were recorded simultaneously. h Distribution of functional classes for the identified neurons in a Functional classification of neurons There have been intense debates over the number of functional types of RGCs, the retina's output neurons, and hence the number of information channels connecting the eyes to the brain. 
In the mouse, it has been variously estimated to be: >12 by traditional sparse electrophysiology28, ~12 types on morphological basis29, ≥16 by genetic markers30, and, most recently, ≥ 30 types with two-photon calcium imaging26. The crucial requirement for successful functional-type accounting is unbiased sampling, over large distances and at cellular granularity. Dense, high-channel-count recording systems are well suited for such applications. Importantly, electrophysiological recordings read out spikes—the transmission protocol between the retina and the brain—rather than somatic calcium fluxes, which are secondary to spike generation. As a proof of concept, in Fig. 4 we flashed 1122-µm diameter light spots of 1-s duration over the retina while simultaneously recording spikes. Using a high-throughput stream processing pipeline (Supplementary Fig. 5), we sorted and functionally classified the evoked spikes from 1750 neurons in response to the light stimuli. The RGCs were classified according to changes in spiking activities during a period spanning 3-s around the 1-s visual stimuli. Upon satisfying appropriate response characteristics (Methods section), the neurons were assigned one of several classical functional types31,32: ON transient, ON sustained, OFF transient, OFF sustained, ON–OFF, and SbC RGCs. Notably, large-scale recordings allowed us to routinely identify and record from the so-called suppression-by-contrast (SbC) RGCs, also known as uniformity detectors. These neurons were first described33 in 1967, but seldom studied electrophysiologically34, presumably due to rare encounters in low-channel-count recordings. Example spike rasters, for five stimulus repetitions, of each functional class are illustrated in Fig. 4b–g. The RGC types were homogeneous throughout the field of analyses (Fig. 4a), consistent with known RGC spatial distribution properties35. We further analysed the population distributions of the six RGC types (Fig. 4h). Approximately 14% of the neurons were the ON–OFF class, in agreement with anatomical accounting36. There was a slight excess of OFF-type neurons comparing to the ON-types, a consequence of some OFF neurons' smaller dendritic arbor and hence higher density37. More sophisticated light stimuli will permit further sub-division of RGC types. Nevertheless, these results illustrate the ability of the processing pipeline and algorithms to analyse the data accurately and automatically. These are important attributes for at-scale analytics. Simultaneous microstimulation and recording Electrical microstimulation has a long application history in neuroscience38,39. It provides a method for perturbing the neurons and/or network being studied. Furthermore, neural stimulation at scale may be useful in medical applications, as demonstrated by a 1500-electrode implantable photodiode array, which enabled blind patients to read and navigate40. With the exception of a recent design16, existing at-scale (>1000 channels) electrophysiological tools have either lacked microstimulation features11 or offered limited simultaneously operable stimulation sites (up to approximately three dozen) despite high electrode number8,19. Space saving achieved by removing the per-channel antialiasing filters allowed us to implement a stimulator within each recording site. The stimulators are individually programmable. Stimulus artifacts are reduced with two circuit features. 
First, routing-associated parasitic capacitance is minimized by integrating the stimulation circuit beneath each electrode. The charging and discharging of this capacitance during stimulation manifest as transient artifacts in the recordings. Second, the MOSFET pseudo-resistor in series with the electrode (Supplementary Fig. 2a) is disabled during and immediately after stimulation to quickly restore the first recording transistor's biasing voltage. Figure 5a illustrates, for ten trials, the electrically evoked spikes of a RGC following a single pulse. These spikes are easily distinguished from the artifacts. Further artifact suppression was achieved by subtracting recordings without neurons from those with neurons (Fig. 5b). The evoked spikes could be detected automatically in these post-processed data using the platform's stream processing pipeline (Fig. 5c). In this example, the neuron responded in 8 of 10 trials. To assess the reliability of electrical stimulation, we stimulated and calculated the response rate (over 10 trials) of 46 RGCs in two retinas using single 1.6 v pulses (Fig. 5d). More than half of these neurons responded to each trial, while the remaining neurons responded with ≥50% probability. Spatiotemporal effects of microstimulation revealed by large-scale electrophysiology. A grid of 2 × 2 electrodes was used throughout. Besides h and j, 1.6 v pulses were used. a Simultaneous stimulation and recording at one electrode. Superimposed traces from 10 trials. b Same data as a, after removing the stimulus artifacts. c Raster plot for the evoked spikes in b. d Response rate (10 trials) of 46 RGCs in two retinas. e Events detected within 3 ms of stimulus onset. The stimulation site is marked by the red arrow. Aggregated data from 10 repetitions. Scale bar, 1275 µm. f Latency of the evoked spikes increased with distance from the stimulation site. Each dot represents a spike (69 in total). The colors denote different neurons. Dotted line is linear fit (R 2 = 0.9541, p < 0.0001, F-test). g Time course of spike-triggered average (STA) stimulus for each (color-matched) neuron in f. h–j Spike-sorted activity maps of identified neurons. The number of distant neurons activated by electrical stimulation increased with stimulus strength. The colors indicate response rate over 10 trials. Dotted lines are approximate outline of the retina. Scale bar, 765 µm To ensure that these short-latency spikes were not spontaneous activities, we examined the quiescent firing rate of 22 neurons from a single retina (Supplementary Fig. 6a). The mean firing rate was 2.6 Hz. In contrast, when electrically stimulated, these same neurons spiked with a mean response rate of 74.1% within 3.0 ms of stimulus delivery, over 10 trials (Supplementary Fig. 6b). Therefore, the increased spiking probability following electrical stimulation was unlikely to be of spontaneous origin. Loss of focal activation by high-strength microstimulation An important goal of microstimulation, and indeed for neuronal manipulation in general, is achieving spatiotemporally precise activation. Some studies have found confined activation with single-neuron precision41,42, while others observed wide-spread neuronal activation39,43. This discrepancy could be a consequence of shortcomings in existing recording tools. 
First, the inability to record from every neuron, or nearly so, across sufficient area could lead one to incorrectly conclude that activation is spatially confined, because signals from the recruited neurons were not completely accounted for. Second, techniques with limited single-spike sensitivity, such as calcium reporters44,45, require large stimuli capable of eliciting multiple spikes to reach detectability threshold. Electric field size increases with stimulus amplitude, influencing a larger neuronal population, giving rise to the alternative, incorrect conclusion of wide-spread neuronal activation, caused by the use of excessive stimuli due to poor spike sensitivity. A key advantage of cellular-resolution, at-scale electrophysiology is the ability to simultaneously observe activities over the entire retina with single-spike sensitivity. We re-examined the spatial confinement of microstimulation in the retina while stimulating at one location (Fig. 5e). With moderate stimulus strength, we observed a 70% (7 out of 10 trials) response rate from one neuron at the stimulation site (Fig. 5i, red circle). A number of distant neurons were also recruited (i.e., responded in ≥50% of trials). The response latencies increased with distance from the stimulation location (Fig. 5f, g). The axon of RGCs converges at the optic nerve head near the central retina, where they exit the eye. The spatial distribution and response latency of these activated distant neurons were consistent with retrograde axonal stimulation46,47, as the axon from these neurons passed in close proximity to the simulation site. The number of distant neurons recruited by electrical stimulation was strongly dependent on the stimulus strength. Weak stimuli recruited one neuron at the stimulating electrode (i.e., the targeted neuron; Fig. 5h). The number of distant neurons elicited, as well as their response probability, increased as the stimulus strength increased (Fig. 5h–j). Therefore, focal activation depended critically on the stimulus strength. It should be sufficiently powerful to recruit the close proximity, target neuron, but should not be excessive, to avoid stimulating distant neurons with axons passing near the stimulating electrode. Traditional multichannel electrophysiology is limited in its ability to simultaneously realize low noise, dense and large-scale recordings, due to the need for per-channel antialiasing filters. We presented an acquisition paradigm that does not require these scalability-limiting elements. A platform based on this paradigm allowed us to record spiking activities in the mouse retina across more than 34,000 electrodes with high SNR. In conjunction with the platform's high-performance computing infrastructure, we were able to sort and functionally classify more than 1700 neurons following light stimulation. Finally, recording at cellular-resolution, across large area and with single-spike sensitivity, allowed us to examine the dynamics of microstimulation in greater spatiotemporal resolution than previously possible. Our acquisition paradigm is inspired by compressive sensing (CS), based on the central CS notion that, if we know something about the frequency contents of the signal being acquired, it may be possible to recover the signal without sampling at the classical Nyquist rate. In electrophysiology, we know how the aliased thermal noise is manifested in the under-sampled, per-channel data. 
A conventional CS approach would attempt recovery with some form of iterative optimization algorithm, which is generally quite slow. In electrophysiology, we could instead exploit the statistical prior of thermal noise for computationally efficient recovery. Thus, while the implementations differ, the general concept is identical. There are further conceptual similarities. CS often uses irregular/random under-sampling to achieve incoherent aliasing. That is, the spectral power of the aliased content is evenly spread out across the frequencies constituting the under-sampled data. This is notionally similar to our approach, where the aliased thermal noise is folded down into the first Nyquist zone (Supplementary Fig. 3b), averaging out variations in the original thermal noise power spectra and spectral angles. Several multichannel electrophysiological systems have recently reached simultaneously recording channel counts in the thousands11,16,23 or even up to 16,000 channels48, at the expense of noise performance. The noise in these tools is several times higher than in traditional systems with at most a few hundred electrodes (≤10 µV rms versus 25–250 µV rms or more). High SNR is critical for spike sorting, where neurons are distinguished on the basis of minute differences in spike waveform. This is particularly relevant for the mammalian nervous system. For example, extracellular signals in the mouse retina generally do not exceed much more than 150 µV peak-to-peak, while reliable spike sorting requires at least 100 µV peak-to-peak signals under optimal SNR conditions49. Indeed, current high-channel-count implementations have used higher-than-typical spike detection thresholds24 (7.5 SD vs. ~4.0 SD) to avoid misinterpreting noise as spikes; have detected few neurons despite the large number of electrodes48 (126 neurons in 16,384 electrodes); or have identified putative events at locations that apparently did not correspond with neurite positions23. In general, because of noise, a substantial fraction of neurons may be unobservable when using these systems, potentially diminishing the benefits of high channel count and/or high electrode density. The systems designed by the Litke and Chichilnisky groups27,50, and the Roska and Hierlemann groups8, have achieved input referred noise as low as 5 µV rms and 2.4 µV rms, respectively. The higher SNR offers several advantages, including improved spike sorting performance and the ability to detect dendritic spikes. However, the superior noise performance limits the simultaneously recording channels to 1024 or fewer, due to the need for per-channel anti-aliasing filters in these classical multiplexed systems. Here we demonstrated the multichannel acquisition paradigm in a 65,536-channel ex vivo recording grid, fabricated using commercial CMOS-integrated circuit (IC) technology. The strategy is generalizable to any dense, high-channel-count electrophysiological system, including implantable, long-term in vivo recording tools. In these, multiplexing is important not only for density and channel-count scaling, but also to reduce wiring, power consumption and heat dissipation. Indeed, the sampling paradigm will work for any big-data acquisition application where the spectral and statistical characteristics of the high-frequency components, above twice the per-channel observation rate and below the recording channel bandwidth, can be reasonably approximated.
Another advantage of this data acquisition approach is that the signal processing steps (channel separation and aliased noise removal) are all implemented in the digital domain. The throughput of these procedures will improve with technological advancements in electronics, allowing the approach to continue scaling beyond the tens of thousands of simultaneously recording and stimulating channels presented here. Recording and stimulation architecture The architecture for our platform is summarized in Fig. 1a. Supplementary Fig. 2a shows the schematic overview for the recording and stimulating circuits. The platform is constructed from a combination of custom IC, circuit-board-level components, synthesized digital logic in field programmable gate arrays (FPGAs) and algorithms running on ×86 CPUs and NVIDIA CUDA processors. The IC (Supplementary Fig. 2b) contained 65,536 front-end elements, divided into 16 blocks of 4096 elements each. Each block is connected to a back-end circuit for additional amplification through a 4096:1 multiplexer. We bandpass filter the outputs from these back-ends to confine spectral content between 50 Hz and 40 MHz with a Sallen-Key filter, implemented on the printed circuit board, then digitize the resulting signals using 12-bit analog-to-digital converters (ADCs). The ADCs' data streams are captured by a FPGA and transferred to a computer. There are four FPGAs in the system, each handling the outputs of four ADCs. Each front-end element contains programmable registers to enable or disable voltage-based, electrical microstimulation, via the capacitive-coupled HfO2 dielectric interface. The system is powered by a 6 V supply, and uses approximately 24.7 W when in operation. The power consumption is dominated by the four Xilinx FPGAs and, to a lesser extent, the board-level bandpass filters. The IC consumes less than 0.6 % (i.e., ≤148 mW) of the total power budget. The electrical stimulation strategy is based on our previous architecture51,52. Thirty-two elements are configured at a time during the programming phase, which takes 60 ns. Only elements needing change of stimulation status require programming. Asserting a global digital signal triggers stimulus delivery on all electrodes programmed to do so. User programmable voltage stimuli ranging between 0 to 2.0 volt, are generated by a voltage source on the circuit board (Fig. 1f), fed into the IC, shared by all stimulating electrodes, and capacitive-coupled to the neurons via the electrodes' HfO2 dielectric interface. The MOSFET pseudo-resistor is turned off during stimulation to minimize source resistance. For all electrical stimulation results presented here, we delivered the stimuli with four neighboring electrodes arranged in a 2 × 2 pattern, we found the larger electric field so generated to more consistently elicit spikes in the mouse retina than that from one electrode. The stimulus threshold is defined as the voltage required to elicit electrically evoked responses within 3 ms of stimulus onset in 50% of trials (out of 10), at the 2 × 2 stimulating electrodes. Integrated circuit fabrication and post-processing The top metal layer of the CMOS IC serves as the base material for the sensing electrodes. This is achieved by etching away the foundry-deposited passivation layers (polyimide, silicon nitride and silicon dioxide) by inductively coupled plasma/reactive ion etching (ICP/RIE) using a mixture of SF6 and O2 plasma. 
We restrict etching to within the sensing region by protecting all other areas with a ~16 µm layer of AZ-4620 photoresist, patterned with standard UV photolithography. The naturally occurring aluminum oxide on the top metal is stripped by ion milling. Next, we deposit 6 nm of HfO2, a high-K dielectric, by atomic layer deposition (ALD) at 150 °C on top of the metal. This serves two purposes. First, it creates a capacitive sensing and stimulation interface; and second, it provides a passivation layer for the underlying aluminum. This HfO2 layer provides a capacitance of 5.8 pF over the 14 × 14 µm electrode. The capacitance is ascertained by building test structures (Supplementary Fig. 2c) consisting of a metal-HfO2-metal stack on a SiO2 substrate, followed by measurements with a semiconductor parameter analyser (Agilent B1500). A 160-nm-thick film of conductive polymer10 is spun over the die surface to reduce the electrode-to-electrolyte interfacial impedance53 (between the conformal ALD HfO2 layer and the saline). This is followed by a 220-nm PMMA A4 (MicroChem) barrier film. The PEDOT:PSS + PMMA stack is patterned using UV lithography with Shipley S1813 photoresist, followed by O2 ICP etching, such that only the electrodes are covered by PEDOT:PSS. Finally, the remaining PMMA is stripped with PG Remover (MicroChem). Each post-processed die is attached to a custom ball grid array (BGA) with thermally conductive epoxy, wire-bonded, and then encapsulated with medical-grade epoxy (OG-116-31, Epoxy Technology, Inc.). In the final step, we attach a polycarbonate ring around the IC using Sylgard 184 (Dow Corning) to serve as the perfusion chamber.
Sparse sampling and data recovery
Multiplexing causes each channel to be observed, through the multiplexer, at a rate (f_visit) considerably lower than the channel bandwidth (f_BW). Unless the content spanning f_visit/2 … f_BW is removed from the per-channel data, aliasing occurs. The problem of per-channel data recovery is thus two-fold. First, the channel data must be extracted from the ADC data stream (Fig. 1b, step 2); and second, the spectral contribution of content in f_visit/2 … f_BW has to be computed (Fig. 1b, step 3) and removed from the channel data (Fig. 1b, step 4). The first task, per-channel data extraction, is achieved by keeping a history of the scanned channels during recording. In this manner, each sampled value from the ADC can be assigned to the channel from which it originates by examining the history at the corresponding time point. The goals of the second task are to preserve the spectral content of the neural signal and to prevent aliasing of content in f_visit/2 … f_BW, for data sampled at only f_visit (Supplementary Fig. 3b), with f_visit ≪ f_BW. We begin by setting the multiplexers' per-channel visit rate (f_visit) sufficiently high that the spike bandwidth (300 Hz–3 kHz) is entirely encompassed by f_visit/2 and that the range f_visit/2 … f_BW is dominated by thermal noise. We typically set f_visit to 10 kHz to achieve these requirements. Several statistical and spectral characteristics of thermal noise make its aliased image amenable to reconstruction in the frequency domain. This thermal noise, which comes from the electrodes and from the amplifier transistors, is a stationary random process, with a flat spectrum and a Gaussian time-domain amplitude distribution54 of zero mean and variance σ².
The probability density function for such a process is
$$N(x\mid\sigma^2) = \frac{1}{\sqrt{2\pi\sigma^2}}\,e^{-\frac{x^2}{2\sigma^2}}$$
We can easily determine every channel's σ² for the thermal noise calculation by recording each channel without multiplexer interruption (i.e. conventional sampling) at full system bandwidth, thereby completely specifying the thermal noise characteristics of the channel up to f_BW. With the noise variance σ² and bandwidth f_BW now known for every recording channel, we computationally construct the thermal noise n_i for each channel i. As the under-sampled thermal noise is folded down into the first Nyquist zone, in the per-channel data, fluctuations in power and spectral angle, from frequency to frequency, are averaged out (Supplementary Fig. 3b). We can compute the power contributed by aliasing using the expected average thermal noise power, multiplied by the number of folded Nyquist zones. Similarly, the spectral angles converge to zero in the aliased version of the thermal noise. We construct vectors in the frequency space to represent the aliased thermal noise, subtracting these from the per-channel data, thereby reversing the effects of aliasing. The effects of thermal noise aliasing, between f_visit/2 … f_BW, in the per-channel data (Supplementary Fig. 3b) can be readily reproduced by decimating the generated noise n_i to a lower rate, f_visit. We denote this aliased sequence a_i:
$$a_i = \mathrm{decimate}(n_i,\, f_{\mathrm{visit}})$$
Next we construct another sequence b_i, a decimated version of n_i without aliasing. This is accomplished by first low-pass filtering n_i at f_visit/2, followed by decimation to the new rate f_visit:
$$b_i = \mathrm{decimate}\left(\mathrm{lowpass}\left(n_i,\, f_{\mathrm{visit}}/2\right),\, f_{\mathrm{visit}}\right)$$
The power contributed by the aliased thermal noise at each frequency, for a system with bandwidth f_BW but sampled at only f_visit, is, therefore, the power difference between the deliberately aliased sequence a_i and the anti-aliased sequence b_i:
$$\mathrm{P}_i = \left|\mathcal{F}(a_i)\right| - \left|\mathcal{F}(b_i)\right|$$
where \(\mathcal{F}\) denotes the Fourier transform. By removing the contribution of P_i in the per-channel data, we avoid aliasing. Because thermal noise is a stochastic process, for any finite-length segment there will be slight fluctuations in power from frequency to frequency, and no two finite-length segments are exactly identical. These uncertainties are minimized with increased length for n_i, and by computing P_i from the averaged power, which converges to the true value as the number of analysed frequencies increases:
$$\mathrm{P}_i = \mathrm{mean}\left(\left|\mathcal{F}(a_i)\right|\right) - \mathrm{mean}\left(\left|\mathcal{F}(b_i)\right|\right)$$
We then construct a set of vectors describing the aliased contents in the frequency domain:
$$\overrightarrow{V}_i = \mathrm{P}_i\, e^{\,j\,\mathrm{arg}\left(\mathcal{F}(d_i)\right)}$$
In the last step, we remove these aliased contents \(\overrightarrow{V}_i\) from the per-channel data \(d_i\). In doing so, we recover the data \(e_i\) without aliasing (Supplementary Fig. 3c):
$$e_i = \mathcal{F}^{-1}\left(\mathcal{F}(d_i) - \overrightarrow{V}_i\right)$$
In practice, we perform the thermal noise parameter estimation procedure separately in physiological saline, prior to the biological experiments, with 50–100 ms recordings at the full sampling rate.
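To make the recovery procedure above concrete, the following is a minimal, single-channel sketch in Python (NumPy/SciPy). It is not the authors' implementation: the bandwidth, visit rate, noise level, record length and the stand-in for the measured per-channel data d_i are illustrative assumptions, and the actual system estimates σ per channel from the saline recording described above.

# Illustrative single-channel sketch of the aliased-thermal-noise subtraction (not the authors' code).
import numpy as np
from scipy.signal import butter, sosfiltfilt

rng = np.random.default_rng(0)
f_bw = 10e6          # assumed full per-channel bandwidth, Hz (illustrative)
f_visit = 10e3       # per-channel multiplexer visit rate, Hz (10 kHz as in the text)
q = int(f_bw // f_visit)
sigma = 5e-6         # per-channel thermal-noise sigma estimated from a full-rate saline recording (assumed)

n_i = rng.normal(0.0, sigma, size=2_000_000)          # synthetic full-bandwidth thermal noise

a_i = n_i[::q]                                        # deliberately aliased decimation
sos = butter(8, (f_visit / 2) / (f_bw / 2), output="sos")
b_i = sosfiltfilt(sos, n_i)[::q]                      # anti-aliased decimation (low-pass, then decimate)

# Average power added by aliasing, from the magnitude spectra of a_i and b_i
P_i = np.mean(np.abs(np.fft.rfft(a_i))) - np.mean(np.abs(np.fft.rfft(b_i)))

# d_i stands in for the per-channel data observed at f_visit (white-noise placeholder here)
d_i = rng.normal(0.0, sigma * np.sqrt(q), size=a_i.size)

D = np.fft.rfft(d_i)
V_i = P_i * np.exp(1j * np.angle(D))                  # aliased content, sharing the spectral angle of d_i
e_i = np.fft.irfft(D - V_i, n=d_i.size)               # recovered data with the aliased contribution removed

In the actual pipeline the same subtraction is applied independently to every channel, using the σ and f_BW measured for that channel.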
This noise parameter estimation can be computed for 16 pixels in parallel, taking advantage of the 16 parallel read-outs on the IC. The noise parameters are saved for each pixel and reused in subsequent biological experiments. The four FPGAs, each collecting digitized data from four ADCs, are connected to a high-performance computer with separate USB3 links (Supplementary Fig. 5), with a combined transfer capacity of approximately 1 GB/s. Low-level drivers and custom libraries store the data to RAID0 hard drives and arbitrate interactions with near real-time processing algorithms written in C++, running on Intel x86 CPUs (Xeon E5-2623 3 GHz) and NVIDIA GPUs (Quadro K5200).
Characterization of acquisition paradigm
The perfusion well is filled with PBS at physiological concentration. We generate 1-kHz sine waves with a function generator (AFG3102C, Tektronix) and attenuate the signal amplitude down to 100–200 µV peak-to-peak. The test signals are applied in the PBS bath through a pair of large (hence low-impedance) Ag-AgCl electrodes. The recording SNR is measured by applying a 1-kHz sine wave into the saline bath, directly above the test electrode. After bandpass filtering the data between 300 Hz and 3 kHz, we then compare its variance against that of a similarly bandpass-filtered quiescent recording with a grounded bath. We determine the corner frequency of the high-pass filter at each electrode by measuring the amplitude attenuation of a bath-applied sine wave at different frequencies. We also compare the performance of our system to that of conventional, low-noise patch clamp recordings. A ~950-kΩ, PBS-filled borosilicate glass pipette, connected to a commercial patch clamp recording setup (MultiClamp 700B, Digidata 1550, pClamp 10, all Molecular Devices), is brought within 50 µm of the test electrode in the 65,536-electrode grid using a micromanipulator (MP-285, Sutter Instruments). The digitally recovered data from the test electrode within the CMOS recording grid are compared against the patch clamp amplifier's recordings. Both signals are bandpass filtered between 300 Hz and 3 kHz.
Mouse retina preparation
WT mice >P40, of either sex, are dark-adapted for one hour and deeply anaesthetized with isoflurane in O2. Following euthanasia, the eyes are rapidly enucleated under dim red light, then placed in oxygenated Ames' medium (Sigma Aldrich) with 1.9 g/L of NaHCO3 and equilibrated with 95% O2/5% CO2, at room temperature. Under a near-infrared-illuminated dissection microscope we hemisect the eyes, remove the anterior chamber, the vitreous and the posterior eye cup, then place the isolated retina in equilibrated Ames' medium at room temperature, in darkness. For recordings, an intact retina is flattened by several small incisions around the periphery, transferred onto a transparent dialysis membrane, then placed retinal-ganglion-cell side down on top of the 65,536-channel CMOS recording/stimulation grid. A small, custom-made platinum harp, with (Supplementary Fig. 7) or without nylon threads, is placed over the membrane to maintain retina-to-electrode contact. The retina is kept alive by perfusing with equilibrated Ames' medium, heated to 33–35 °C, at a rate of ~4.5 mL/min. We allow at least 30 minutes of recovery in the warm solution before recordings. All experiments are performed in the dark. Visualization of the retina under a fixed-stage upright microscope (Nikon FN1) is achieved with near-infrared illumination (≥850 nm) and an IR-sensitive CCD camera.
Visual stimuli are generated on an OLED display with built-in digital signal processor and memory, then projected onto the retina via custom optics attached to the modified epi-fluorescence light path of a fixed-stage microscope. Visual patterns are triggered via a TTL digital input. The light-spot stimuli are 1 s in duration and 1122 µm in diameter, white on a black background at 100% contrast, with an intensity of 11.4 × 10^11 photons/s/cm2. Eight mouse retinae were used for the biological results presented here. After per-channel data separation and noise removal (Fig. 1b), the data are bandpass filtered between 100 Hz and 3 kHz. Putative spikes are detected by threshold crossings exceeding 4.5 standard deviations of the mean. We use an event window of 0.7 ms before and 1.0 ms after the spike peak. To facilitate spike sorting, whenever an event is detected, we also collect the waveforms from the electrode's eight adjacent neighbors over the corresponding timeframe. These waveforms are concatenated and saved to a database, mapping from electrode address to event data (peak time, waveform data and recording sweep ID). We compute the principal components of each waveform by singular value decomposition, then sort the waveforms using the first four scores by expectation maximization (EM) with a Gaussian mixture model. We repeat the EM procedure ten times to avoid suboptimal, local-maxima solutions. The repetitions are performed concurrently on parallel CPUs/GPUs to reduce run time. The clustering with the highest fitness metric, namely maximal between-cluster separation, minimal within-cluster spread, lack of singleton clusters and zero inter-spike-interval violations, is deemed the correct/best solution. Conventional EM algorithms require the number of clusters a priori, an impractical requirement for at-scale spike sorting. We implement automatic cluster-number detection using the foregoing fitness metric. Specifically, the number is increased incrementally from a minimum of two up to a maximum of 10. The lowest cluster number without inter-spike-interval violations and having the highest, or equal highest, fitness metric is used. These procedures are implemented on parallel hardware to reduce run time. All sorted spikes are saved to a database, mapping from electrode address to a list of waveforms, their associated spike times, sweep IDs and cluster assignments. To classify RGCs into functional types31,32, we flash a 1-s light spot over the region of interest in the retina. The recording duration is three seconds, with 1-s pre-stimulus and 1-s post-stimulus periods. We divide the 3-s recording interval into six equal segments of 500 ms each, numbered 1–6. ON-type neurons spike predominantly in segments 3 and 4, while OFF-type neurons spike predominantly in segments 5 and 6. Neurons with high segment 3 or segment 5 rates relative to segments 4 and 6 are designated "Transient"; otherwise they are designated "Sustained". ON–OFF RGCs are distinguished by simultaneously strong responses in segments 3 and 5, together with low baseline spiking rates. SbC RGCs are distinguished by their high baseline rate, with little or no spiking in segments 3 and 5. Units with an ambiguous spiking profile, lacking the foregoing characteristics, are not assigned a functional class. Units with a low spike rate (≤2 Hz) during the 3-s recording period are also not functionally classified, because the low spike counts preclude accurate classification.
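As an illustration of the event detection and clustering steps just described, here is a compressed, hypothetical sketch in Python. It uses scikit-learn's PCA and GaussianMixture as stand-ins for the custom SVD/EM implementation, and BIC as a simplified stand-in for the fitness metric (cluster separation, spread, singleton and inter-spike-interval criteria) used in the actual pipeline; the 4.5 SD threshold, 0.7/1.0 ms window, four principal-component scores, ten EM restarts and 2–10 cluster range come from the text, while the sampling rate and function names are assumptions.

# Simplified per-electrode spike detection and sorting sketch (stand-in, not the authors' pipeline).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture

def detect_waveforms(trace, fs=10_000, thresh_sd=4.5, pre_ms=0.7, post_ms=1.0):
    # Threshold crossings at thresh_sd standard deviations from the mean; one event per window.
    pre, post = int(pre_ms * 1e-3 * fs), int(post_ms * 1e-3 * fs)
    dev = np.abs(trace - trace.mean())
    candidates = np.flatnonzero(dev > thresh_sd * trace.std())
    events, last = [], -np.inf
    for i in candidates:
        if i - last > post and pre <= i < trace.size - post:
            events.append(trace[i - pre:i + post])
            last = i
    return np.asarray(events)

def sort_waveforms(waveforms, max_k=10):
    # First four principal-component scores, clustered with a Gaussian mixture model;
    # the cluster count is chosen over 2..max_k by a simplified criterion (BIC).
    scores = PCA(n_components=4).fit_transform(waveforms)
    models = [GaussianMixture(n_components=k, n_init=10, random_state=0).fit(scores)
              for k in range(2, max_k + 1)]
    best = min(models, key=lambda m: m.bic(scores))
    return best.predict(scores)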
The pseudocode for the functional classification procedure can be found in Supplementary Note 1. In Fig. 5, the spike-triggered average (STA) stimulus is defined as the average stimulus preceding a spike from a neuron. Specifically, it is the sum of the stimuli (voltage impulses through the HfO2 dielectric) that preceded each spike, divided by the number of spikes. To create the spike latency versus distance plot (Fig. 5f), we begin by collapsing, for each electrode, the spike-sorted events over 10 stimulus repetitions into a time-invariant plot. From this plot, at each electrode, we select all spike-sorted unit(s) with at least 5 events within 5 ms following stimulus delivery (i.e., 5 successes out of 10 trials). The spike times of these units are then plotted as a function of distance from the stimulation site. The data sets generated and/or analysed during the current study are available from the corresponding author on reasonable request.
Alivisatos, A. P. et al. The brain activity map project and the challenge of functional connectomics. Neuron 74, 970–974 (2012). Yuste, R. & Katz, L. C. Control of postsynaptic calcium influx in developing neocortex by excitatory and inhibitory neurotransmitters. Neuron 6, 333–344 (1991). Ahrens, M. B., Orger, M. B., Robson, D. N., Li, J. M. & Keller, P. J. Whole-brain functional imaging at cellular resolution using light-sheet microscopy. Nat. Methods 10, 413–420 (2013). Prevedel, R. et al. Simultaneous whole-animal 3D imaging of neuronal activity using light-field microscopy. Nat. Methods 11, 727–730 (2014). Grienberger, C. & Konnerth, A. Imaging calcium in neurons. Neuron 73, 862–885 (2012). Yuste, R. & Konnerth, A. Imaging in neuroscience and development: a laboratory manual. (Cold Spring Harbor Laboratory Press, 2011). Buzsáki, G. et al. Tools for probing local circuits: High-density silicon probes combined with optogenetics. Neuron 86, 92–105 (2015). Ballini, M. et al. A 1024-Channel CMOS microelectrode array with 26,400 electrodes for recording and stimulation of electrogenic cells in vitro. IEEE J. Solid State Circuits 49, 2705–2719 (2015). Lopez, C. M. et al. An implantable 455-active-electrode 52-channel CMOS neural probe. IEEE J. Solid State Circuits 49, 248–261 (2013). Khodagholy, D. et al. NeuroGrid: recording action potentials from the surface of the brain. Nat. Neurosci. 18, 310–315 (2015). Berdondini, L. et al. Active pixel sensor array for high spatio-temporal resolution electrophysiological recordings from single cell to large scale neuronal networks. Lab. Chip 9, 2644–2651 (2009). Du, J., Blanche, T. J., Harrison, R. R., Lester, H. A. & Masmanidis, S. C. Multiplexed, high density electrophysiology with nanofabricated neural probes. PLoS ONE 6, e26204 (2011). Buzsáki, G., Anastassiou, C. A. & Koch, C. The origin of extracellular fields and currents - EEG, ECoG, LFP and spikes. Nat. Neurosci. 13, 407–420 (2012). Fiscella, M. et al. Recording from defined populations of retinal ganglion cells using a high-density CMOS-integrated microelectrode array with real-time switchable electrode selection. J. Neurosci. Methods 211, 103–113 (2012). Eversmann, B. et al. A 128x128 CMOS biosensor array for extracellular recording of neural activity. IEEE J. Solid State Circuits 38, 2306–2317 (2003). Bertotti, G. et al. in Biomedical Circuits and Systems Conference (BioCAS) (IEEE, Lausanne, 2014). Muller, R. et al. A minimally invasive 64-channel wireless uECOG implant. IEEE J Solid State Circuits 50, 344–359 (2015). Johnson, L. J. et al.
A novel high electrode count spike recording array using an 81,920 pixel transimpedance amplifier-based imaging chip. J. Neurosci. Methods 205, 223–232 (2012). Yuan, X. et al. in Proceedings of the IEEE Symposium on VLSI Circuits 1–2 (Honolulu, HI, USA, 2016). Candes, E., Romberg, J. & Tao, T. Stable signal recovery from incomplete and inaccurate measurements. Commun. Pure Appl. Math. 59, 1207–1223 (2006). Aldroubi, A. & Gröchenig, K. Nonuniform sampling and reconstruction in shift-invariant spaces. SIAM Rev. 43, 585–620 (2001). Viventi, J. et al. Flexible, foldable, actively multiplexed, high-density electrode array for mapping brain activity in vivo. Nat. Neurosci. 14, 1599–1605 (2011). Lambacher, A. et al. Identifying firing mammalian neurons in networks with high-resolution multi-transistor array (MTA). Appl. Phys. B 102, 1–11 (2011). Maccione, A. et al. Following the ontogeny of retinal waves: pan-retinal recordings of population dynamics in the neonatal mouse. J. Physiol. 592, 1545–1563 (2014). Briggman, K. L., Helmstaedter, M. & Denk, W. Wiring specificity in the direction-selectivity circuit of the retina. Nature 471, 183–188 (2011). Baden, T. et al. The functional diversity of retinal ganglion cells in the mouse. Nature 529, 345–350 (2016). Litke, A. M. et al. Large-scale imaging of retinal output activity. Nucl. Instrum. Methods Phys. Res. A 501, 298–307 (2003). Farrow, K. & Masland, R. H. Physiological clustering of visual channels in the mouse retina. J. Neurophysiol. 105, 1516–1530 (2011). Kong, J.-h, Fish, D. R., Rockhill, R. L. & Masland, R. H. Diversity of ganglion cells in the mouse retina: Unsupervised morphological classification and its limits. J. Comp. Neurol. 489, 293–310 (2005). Sümbul, U. et al. A genetic and computational approach to structurally classify neuronal types. Nat. Commun. 5, 3512 (2014). Wässle, H. Parallel processing in the mammalian retina. Nat. Rev. Neurosci. 5, 747–757 (2004). Zeck, G. M. & Masland, R. H. Spike train signatures of retinal ganglion cell types. Eur. J. Neurosci. 26, 367–380 (2007). Levick, W. Receptive fields and trigger features of ganglion cells in the visual streak of the rabbit's retina. J. Physiol. 188, 285–307 (1967). Sivyer, B., Taylor, W. R. & Vaney, D. I. Uniformity detector retinal ganglion cells fire complex spikes and receive only light-evoked inhibition. Proc. Natl Acad. Sci. USA 107, 5628–5633 (2010). Rodieck, R. W. The First Steps in Seeing. (Sinauer Associates, Sunderland, MA, USA, 1998). Vaney, D. I., Sivyer, B. & Taylor, W. R. Direction selectivity in the retina: symmetry and asymmetry in structure and function. Nat. Rev. Neurosci. 13, 194–208 (2012). Ratliff, C. P., Borghuis, B. G., Kao, Y.-H., Sterling, P. & Balasubramanian, V. Retina is structured to process an excess of darkness in natural scenes. Proc. Natl Acad. Sci. USA 107, 17368–17373 (2010). Cohen, M. R. & Newsome, W. T. What electrical microstimulation has revealed about the neural basis of cognition. Curr. Opin. Neurobiol. 14, 169–177 (2004). Logothetis, N. K. et al. The effects of electrical microstimulation on cortical signal propagation. Nat. Neurosci. 13, 1283–1291 (2010). Zrenner, E. et al. Subretinal electronic chips allow blind patients to read letters and combine them to words. Proc. R. Soc. B 278, 1489–1497 (2011). Sekirnjak, C. et al. Electrical stimulation of mammalian retinal ganglion cells with multielectrode arrays. J. Neurophysiol. 95, 3311–3327 (2006). Sekirnjak, C.
et al. High-resolution electrical stimulation of primate retina for epiretinal implant design. J. Neurosci. 28, 4446–4456 (2008). Histed, M. H., Bonin, V. & Reid, R. C. Direct activation of sparse, distributed populations of cortical neurons by electrical microstimulation. Neuron 63, 508–522 (2009). Tian, L., Hires, S. A. & Looger, L. L. Imaging neuronal activity with genetically encoded calcium indicators. Cold Spring Harb. Protoc. 1012, 647–656 (2012). Harris, K. D., Quiroga, R. Q., Freeman, J. & Smith, S. L. Improving data quality in neuronal population recordings. Nat. Neurosci. 19, 1165–1174 (2016). Tehovnik, E. J., Tolias, A. S., Sultan, F., Slocum, W. M. & Logothetis, N. K. Direct and indirect activation of cortical neurons by electrical microstimulation. J. Neurophysiol. 96, 512–521 (2006). Fried, S. I., Lasker, A. C. W., Desai, N. J., Eddington, D. K. & Rizzo, J. F. Axonal sodium channel bands shape the response to electric stimulation in retinal ganglion cells. J. Neurophysiol. 101, 1972–1987 (2009). Menzler, J., Channappa, L. & Zeck, G. Rhythmic ganglion cell activity in bleached and blind adult mouse retinas. PLoS ONE 9, e106047 (2014). Tikidji-Hamburyan, A. et al. Retinal output changes qualitatively with every change in ambient illuminance. Nat. Neurosci. 18, 66–74 (2015). Li, P. H. et al. Anatomical identification of extracellularly recorded cells in large-scale multielectrode recordings. J. Neurosci. 35, 4663–4675 (2015). Lei, N. et al. High-resolution extracellular stimulation of dispersed hippocampal culture with high-density CMOS multielectrode array based on non-Faradaic electrodes. J. Neural. Eng. 8, 044003 (2011). Lei, N., Shepard, K. L., Watson, B. O., MacLean, J. N. & Yuste, R. in Digest of Technical Papers, International Solid State Circuits Conference (San Francisco, CA, USA, 2008). Samba, R., Herrmann, T. & Zeck, G. PEDOT–CNT coated electrodes stimulate retinal neurons at low voltage amplitudes and low charge densities. J. Neural. Eng. 12, 016014 (2015). Barry, J. R., Lee, E. A. & Messerschmitt, D. G. Digital Communication. (Kluwer Academic Publisher, Norwell, MA, USA, 2003).
This work was supported in part by the U. S. Army Research Laboratory and the U. S. Army Research Office under Contract W911NF-12-1-0594, by the Defense Advanced Research Project Agency (DARPA) under Contract N66001-17-C-4002, by the National Institutes of Health under Grants U01NS0099697 and U01NS0099726, and by the National Health and Medical Research Council of Australia CJ Martin Fellowship (APP1054058).
Department of Electrical Engineering, Columbia University, New York, NY, 10027, USA David Tsai, Daniel Sawyer & Adrian Bradd Departments of Biological Sciences and Neuroscience, Columbia University, New York, NY, 10027, USA Rafael Yuste Departments of Electrical and Biomedical Engineering, Columbia University, New York, NY, 10027, USA Kenneth L. Shepard
D.T. built the system with assistance from D.S. and A.B. D.T. carried out the biological experiments. D.T., D.S., A.B., R.Y. and K.S. contributed to the system design, experimental discussions, and wrote the manuscript. Correspondence to Kenneth L. Shepard. The authors declare no competing financial interests. Tsai, D., Sawyer, D., Bradd, A. et al. A very large-scale microelectrode array for cellular-resolution electrophysiology. Nat Commun 8, 1802 (2017).
https://doi.org/10.1038/s41467-017-02009-x
CommonCrawl
Expected marks of third team in a tournament of $7$ teams? In a tournament with $7$ teams, each team plays one match with every other team. For each match, the team earns two points if it wins, one point if it ties, and no points if it loses. At the end of all matches, the teams are ordered in the descending order of their total points (the order among the teams with the same total is determined by a whimsical tournament referee). The first three teams in this ordering are then chosen to play in the next round. What is the minimum total number of points a team must earn in order to be guaranteed a place in the next round? $13$ My attempt: We have $7$ teams and each team will play $6$ matches. Possible scenario for the maximum marks of the third team: the first team wins $4$ matches against the last $4$ teams and ties with the second and third teams, so the total possible marks of the first team are $4\times2+1+1=10$. The second team wins $4$ matches against the last $4$ teams and ties with the first and third teams, so the total possible marks of the second team are $4\times2+1+1=10$. The third team wins $4$ matches against the last $4$ teams and ties with the first and second teams, so the total possible marks of the third team are $4\times2+1+1=10$. Now any of the last $4$ teams can get up to $3\times2=6$ marks, since the last four teams each lost three matches, to the first, second and third teams. So the maximum marks for the third team can be $10$. Can you explain this in a formal way, please? probability combinatorics discrete-mathematics $\begingroup$ If a team wins all matches then the max marks are $12$; I didn't get you exactly. $\endgroup$ – Archis Welankar Apr 10 '16 at 7:31 $\begingroup$ @ArchisWelankar, the question asks about the maximum of the third team, not the first team. In that case, the second team can have max marks $10$ and the third team $8$, or both the second and third teams can get maximum marks $9$ each. The third team will never get maximum marks $10$. Am I right? $\endgroup$ – 1 0 Apr 10 '16 at 8:45 You are correct that the maximum points for the third-place team is 10 points. However, the question asked for the minimum number of points that would guarantee at least a third-place finish. For example, if a team gets 9 points, is it guaranteed a top-3 finish? The answer is that 9 points is not a guarantee, since it is possible that the top 4 teams all finish with 9 points. This would happen if each of the top 4 had one win, one loss, and one tie against the other 3 teams in the top 4, while all beating the bottom 3 teams. However, 10 points would guarantee a top-3 finish. To prove this, consider that there are 21 total games, which means there are 42 total points awarded. The top 4 teams can combine for a maximum of 36 points, since the other 3 teams will get 6 points in the games played amongst themselves. If 4 teams split 36 points, it is impossible for all 4 teams to get 10 points or more, therefore 10 points guarantees being in the top 3. In conclusion, 10 is the minimum number of points that would guarantee a team at least a third-place finish. – browngreen
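A quick brute-force sanity check of the two claims above (a hypothetical Python snippet, not part of the original answer): it enumerates the $3^6$ outcomes of the games among some fixed set of four teams and, since extra cross-group losses could only lower their totals, assumes those four beat the remaining three teams, which is the best case for making all four scores large.

from itertools import product

pairs = [(i, j) for i in range(4) for j in range(i + 1, 4)]   # the 6 games among the four candidate teams
best_min, four_nines = 0, False
for outcome in product((0, 1, 2), repeat=len(pairs)):         # 0: j wins, 1: tie, 2: i wins
    pts = [6, 6, 6, 6]                                        # assume each of the four beats the bottom 3 teams
    for (i, j), r in zip(pairs, outcome):
        if r == 2:
            pts[i] += 2
        elif r == 0:
            pts[j] += 2
        else:
            pts[i] += 1
            pts[j] += 1
    best_min = max(best_min, min(pts))
    four_nines = four_nines or all(p == 9 for p in pts)

print(four_nines)   # True: four teams can all end with 9 points, so 9 does not guarantee a top-3 spot
print(best_min)     # 9: no outcome gives four teams 10+ points each, so 10 points does guarantee it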
CommonCrawl
Brief Communication Open Access Published: 12 January 2022 Challenging the sustainability of urban beekeeping using evidence from Swiss cities Joan Casanelles-Abella ORCID: orcid.org/0000-0003-1924-9298 1 , 2 & Marco Moretti 1 npj Urban Sustainability volume 2 , Article number: 3 ( 2022 ) Cite this article Agroecology Science, technology and society Society Urban ecology An Author Correction to this article was published on 09 June 2022 Urban beekeeping is booming, heightening awareness of pollinator importance but also raising concerns that its fast growth might exceed existing resources and negatively impact urban biodiversity. To evaluate the magnitude of urban beekeeping growth and its sustainability, we analysed data on beehives and available resources in 14 Swiss cities in 2012–2018 and modelled the sustainability of urban beekeeping under different scenarios of available floral resources and existing carrying capacities. We found large increases in hive numbers across all cities, from an average 6.48 hives per km 2 (3139 hives in total) in 2012 to an average 8.1 hives per km 2 (6370 in total) in 2018, and observed that available resources are insufficient to maintain present densities of beehives, which currently are unsustainable. Cities are increasingly committed to sustainability goals (Sustainable Development Goal 11), leading to initiatives to enhance environmental justice and promote biodiversity and nature's contributions to people. This has led to an increase in greening activities in urban green spaces (UGS) and pro-environmental behaviours by local residents and city governments 1 . For example, in recent years beekeeping has increased in several cities in Europe, partly as a reaction to the current biodiversity crisis 2 . Honeybees ( Apis mellifera ) are perceived as key pollinators, and beekeeping is therefore seen by many as a conservation effort rather than an agricultural practice, which has generated debate and criticism 2 , 3 .
Large honeybee densities can enhance competition between wild bees and honeybees 4 , 5 , 6 , particularly when resource availability is low 7 , although the interplay of other factors (e.g. spatial and temporal context 7 , 8 ) complicates the picture. Nonetheless, although data are still scarce, the rapid, unregulated increase of urban beekeeping in a growing number of cities worldwide (e.g. London 9 , Paris 10 , Perth 7 ) and the documented effects on other pollinator groups 10 have called the sustainability of urban beekeeping into question. Beekeeping is a particular form of livestock raising. Livestock are in large part dependent on the resources provided by their owners, and beekeeping represents a special case for four reasons. First, beekeepers do not need to provide their own floral resources, as honeybees can move freely and exploit available resources. Second, it is impossible to control the movements and foraging locations of honeybees. Third, honeybees reproduce faster than other livestock. Fourth, beekeeping might not be perceived as an exploitative activity (regarding floral resources) because of the positive association between honeybees and pollination services. Still, floral resources may be limited, even in cities. For example, research in London has shown that in a large part of the city the existing resources are insufficient to maintain the current number of honeybees 9 . To improve our knowledge on the sustainability of urban beekeeping, in this study we aimed to answer the following questions: (1) To what extent is urban beekeeping increasing? and (2) Are the available floral resources sufficient to sustain the growing numbers of honeybees? To do so, we analysed urban beekeeping data from 14 cities in Switzerland, a model country in this respect as beehive registration is compulsory, over the period 2012–2018 (Fig. 1a , Supplementary Fig. 1 , see "Methods"), together with landcover data at an unprecedented resolution. Eleven of the cities had data on precise spatial distributions and the number of beehives per location. For these eleven cities, we modelled the sustainability of urban beekeeping under different scenarios of available floral resources and existing carrying capacities. Fig. 1: Urban beekeeping trends in Swiss cities for the period 2012–2018. a Number of honeybee hives per year for all 14 Swiss cities. Each line and colour represents a single city. b Response curves showing the number of hives per beekeeping location per year for all of the Swiss cities except Geneva and Chur (where spatially explicit data were of low quality). Lines represent linear models and bands indicate 95% confidence intervals. Each line and colour represents a single city. c Percentage of cells in each city with an increase (green), decrease (red) or no change (dark grey) in the number of hives. See also Supplementary Tables 1 and 2 and Supplementary Fig. 2 . Increase in urban beekeeping in Swiss cities To assess how much beekeeping has changed in these cities over the considered period, we calculated the increase in the number of beekeeping locations and hives. We found that the total number of hives increased in 12 of the 14 cities (median increase = 69%, min. 1% in Thun, max. 2387% in Lugano). Further, in most cities the number of beekeeping locations (and possibly the number of beekeepers) increased (Supplementary Fig. 2 ), leading to a slightly lower ratio of the number of hives to the number of beekeeping locations (Fig. 1 ).
Although the motivations for beekeeping are unknown, our results suggest that in most of the considered cities beekeeping is pursued by several people, each with a relatively small number of hives. To investigate the temporal trend of beekeeping inside the cities, we first excluded cities without spatially explicit data (i.e. Basel, Chur and Geneva). We then divided each remaining city into 1 km 2 cells and counted the number of hives per cell per year. We found that the number of hives increased from 2012 to 2018 in the majority of cells and cities (Fig. 1 ), by 1 to 8 hives in 75% of the cells and by up to 198 hives in a few extreme cases. Assessing the sustainability of urban beekeeping We defined sustainable urban beekeeping (i.e. green cells in Fig. 2 ) as situations where the available UGS in a cell exceeds the UGS required to support existing beehives. UGS was used as a proxy for floral resources available for honeybees, obtained from 11 (see Methods) and assumed to be uniform in quality. We estimated the required UGS in a given cell using a carrying capacity value representing the maximum number of hives that can be sustained in a cell covered 100% by UGS. We modelled the sustainability under different scenarios of both carrying capacity (sustainable number of hives in a 1 km 2 cell) and available floral resources (amount of UGS in each 1 km 2 cell) for 2012 and 2018 (see Methods). Initially, following 9 , we considered a carrying capacity of 7.5 beehives per km 2 of UGS. Because any carrying capacity value comes with assumptions, we additionally considered different carrying capacity scenarios, ranging from 1 to an unrealistic value of 75 hives per km 2 (10 times greater than in 9 ). To further explore possible actions to enhance sustainable urban beekeeping, we simulated increases in available UGS from 0% (no increase) to 100%. Fig. 2: Sustainability of urban beekeeping under different scenarios regarding the amount of available urban green space (UGS) and the carrying capacity for the years 2012 and 2018. a – d The proportion of cells with a negative UGS balance (Y-axis) (i.e. the available UGS is smaller than the required UGS based on the existing number of hives in the cell) for different carrying capacity values ( X -axis) for all cities in 2012 ( a and b ) and 2018 ( c and d ), and considering a 0% ( a and c ) or 50% ( b and d ) increase in available UGS. Each coloured line represents the model for one city. Dashed red vertical lines indicate the carrying capacities used in the plots ( e – l ) and represent 7.5 and 75 hives per 1 km 2 . e – l An example of the spatial distribution of the UGS balance in the cells in the city of Zurich for 2012 ( e , f , i , j ) and 2018 ( g , h , k , l ) considering an increase in both available UGS, from 0% ( e , g , i , k ) to 100% ( f , h , j , l ), and carrying capacity, from 7.5 ( e – h ) to 75 ( i – l ) hives per km 2 . See Supplementary Figs. 3 and 4 for additional scenarios, Supplementary Fig. 5 for the maps for all cities and Supplementary Fig. 6 for additional information on the balances in the UGS. Data supporting this figure can be found in Supplementary Data 2 . Orthophotos from Zurich were obtained from the SWISSIMAGE 25 from the Federal Office of Topography Swisstopo openly accessible 30 . We found that in all cities, the estimated amount of available UGS was not enough to maintain the number of beehives in either 2012 or 2018 (Fig. 2 ). 
Cities such as Lugano, Zurich and Luzern had a particularly strong negative UGS balance (see Fig. 2 for Zurich and Supplementary Fig. 5 for the remaining cities; see Methods for calculation). Only carrying capacities of at least 20 hives per km 2 resulted in at least 50% of cells with a positive UGS balance (Fig. 2 ). These carrying capacities are unlikely to be realistic, as they exceed the current honeybee densities in Switzerland of ca. 2 hives per km 2 (see ref. 6 ). Increasing available UGS had a limited effect on the number of sustainable cells, in contrast to increasing the carrying capacity, and beekeeping remained unsustainable in most of the cells even with increases of 50% (Fig. 2 and Supplementary Figs. 3 – 5 ). Comparison between 2012 and 2018 showed a clear densification and expansion of beekeeping (Fig. 2 ). While all cities had some cells without beehives in 2012, most occupied cells had a negative UGS balance (Fig. 2 ). In 2018, cities had a median increase of 52% (min. 5%, max. 983%) in the number of occupied cells compared with 2012, and most of the occupied cells were unsustainable (Fig. 2 , Supplementary Fig. 6 ). Our results are in line with the increasing beekeeping trends observed in other cities 9 , 10 . In addition, our analysis suggests that the available UGS in cities might not be sufficient to cope with the current pace at which urban beekeeping is growing. The available UGS in both occupied and unoccupied cells might still be able to sustain current honeybee populations. However, continuous increases in the number of hives, with UGS likely not increasing at an equal pace, pose a challenging scenario in the near future for honeybees, not to mention other pollinating species which we did not consider here. Urban beekeeping is a relatively new activity, yet there is a lack of regulation concerning sustainable densities, and increased beehive densities might have negative effects on biodiversity and on honeybees themselves. High densities of honeybee hives have been shown to deplete existing resources in natural 4 and agricultural 8 areas, ultimately negatively affecting other pollinators 4 . Concerning urban ecosystems, in Paris the density of beehives was found to be negatively related to wild pollinator visitation rates 10 , yet in Perth, where honeybees are not native, the effect of urban beekeeping on wild bees was mixed 7 . Cities are social-ecological systems, and individual decisions of stakeholders can have important impacts on the whole ecosystem. Adding hives in new and existing beekeeping locations might result in strong pressure on available resources. In agricultural contexts, uncontrolled increases of other livestock have led to what is known as a "tragedy of the commons" 12 , when uncoordinated and unregulated exploitation results in the depletion of the common resources. The same applies to urban beekeeping, but in an even more complex situation. Honeybees are not spatially limited and can exploit the available resources freely, regardless of ownership. This skews the perception of the relationship between the consumed and available resources, and thus of the sustainability of the system. Our study represents a first attempt to quantify the sustainability of urban beekeeping, yet it has limitations. First, when we estimated the UGS we assumed all land covers to be equal and probably overestimated the available UGS.
Future studies might improve the estimation of the UGS, and of resource availability, by better accounting for the quantity, diversity and quality of floral resources provided by different UGS types (e.g. see ref. 13 ) and estimating them across types of UGS in different cities as in 14 , which could be combined with high-resolution landcover maps such as 11 . Second, we could not incorporate the responses (e.g. densities, population dynamics) of either honeybees or the other flower-visiting insect species with which honeybees can compete 10 . While population responses of wild pollinators might be difficult to obtain, they could be obtained more easily for honeybees through monitoring programmes. There is a pressing need to create sustainable management strategies for urban beekeeping. Urban ecosystems can often contain important levels of biodiversity, including pollinators 14 , 15 , and thus need to be integrated into current biodiversity conservation frameworks (e.g. IPBES 16 , IUCN 17 ). Concerning pollinators, anthropogenic activities such as urban beekeeping represent a critical challenge that has to be addressed in order to make use of the opportunities for conservation that urban ecosystems provide 3 , 18 . Managing beekeeping is a challenging task, especially in cities, due to the spatial scale at which it occurs, the prevailing positive view of honeybees and the services they provide, and the existing trade-offs between biodiversity conservation and anthropogenic activities. Nonetheless, the growing body of evidence pointing to the unsustainability of (urban) beekeeping, including our study, calls for interventions to ensure proper regulation. These interventions should result from a transdisciplinary engagement of scientific research, urban policies and citizens, as proposed by 3 . For instance, feasible, practical interventions could include: (1) regulating the number of beekeepers (or beekeeping locations) and the densities of hives 3 , 19 , (2) ensuring a sufficient distance between hives as proposed by 19 , and (3) enhancing floral resources and pollinator habitats in cities. This could be achieved by restoring existing impoverished habitats (e.g. transforming lawns into grasslands 20 , promoting wild plants in small vegetation patches such as tree pits 21 ) or by creating novel ones 22 . In that regard, citizen engagement promises to be a key tool 23 . Study cities We selected a total of 14 cities and urban agglomerations in Switzerland (Fig. 1 ). They were selected according to their population, area and availability of urban beekeeping data (Supplementary Table 2 ). Each studied city and urban agglomeration was divided into 1 × 1 km cells (Supplementary Fig. 1 ). Urban beekeeping Annual data on the spatial distribution of beekeeping locations and the number of hives at each location in the studied areas were obtained from the cantonal veterinary offices. Switzerland represents an exemplary country as beehive registration has been compulsory since 2010 24 . The considered period was 2012–2018. As exceptions, data were only available for the period 2012–2014 for Basel and 2013–2018 for Lausanne (Supplementary Table 3 ). The data from each veterinary office were checked separately and only records of beekeeping locations with reliable coordinates were included. For Chur and Geneva, where the beekeeping locations did not have precise coordinates, and in Basel, we only used the available data to study the increase in the number of hives over time.
Available urban greenspace Data on available urban greenspace (UGS) were obtained from a continental-scale land-cover map of Europe (ELC10 11 ). With a resolution of 10 m, the ELC10 map is currently the most detailed land-cover map of Europe, and it can distinguish between main features of the cityscape, such as gardens and hedgerows 6 . The ELC10 map was generated by classifying satellite imagery (Sentinel-1 and Sentinel-2) into eight classes of land cover using machine learning algorithms, based on data from the year 2018 11 . We considered the following land-cover classes as UGS: cropland, woodland, shrubland, grassland and wetland. For simplicity, we assumed (1) equal floral resources in all these land-cover classes, although they are expected to vary greatly, and (2) the same land cover composition in 2012 and 2018. We additionally simulated increases in the amount of available UGS by adding percentages to the original values in intervals of 10%, ranging from 0 to 100%. Spatial data processing, including calculations of UGS and of the numbers of hives and beekeeping locations, was done in QGIS v.3.10 25 . We calculated the required UGS and the UGS balance for 2013 and 2018 in Lausanne, and for 2012 and 2018 in the remaining 10 cities. In a given city, for each cell and each year, we first calculated the total number of honeybee hives. We then calculated the required UGS in each cell according to the number of hives present and an estimated carrying capacity value, i.e. the maximum number of honeybee hives that can be sustained in 1 km 2 of UGS. The UGS balance in a given year was calculated by subtracting the required UGS in a given cell from the available UGS in that cell. Equation ( 1 ) shows the calculation of the available UGS, Eq. ( 2 ) shows the calculation of the required UGS and Eq. ( 3 ) shows the calculation of the UGS balance:
$$\mathrm{Available\ UGS}_{ij} = \mathrm{AvailableELC10}_{ij} + \mathrm{AvailableELC10}_{ij} \ast I$$ (1)
$$\mathrm{Required\ UGS}_{ij} = \frac{N_{ij}}{\mathrm{CCV}}$$ (2)
$$\mathrm{Balance\ UGS}_{ij} = \mathrm{Available\ UGS}_{ij} - \mathrm{Required\ UGS}_{ij}$$ (3)
where i is the cell, j is the city, I is the simulated percentage of increase (in decimal form) in available UGS, N is the number of hives, CCV is the assumed carrying capacity and \(\mathrm{AvailableELC10}\) is the amount of available UGS based on the ELC10 map, without an increase. The UGS balance was calculated for the different carrying capacity scenarios and increases in available UGS. Finally, for each city we calculated the proportion of cells with a positive balance (i.e. the required UGS for beekeeping was smaller than the available UGS) and with a negative balance (i.e. the required UGS for beekeeping was larger than the available UGS). All calculations were completed using R version 4.0.1 26 in RStudio version 1.4.1106 27 .
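The authors performed these calculations in QGIS and R; purely as an illustration of Eqs. (1)–(3), a hypothetical Python/pandas sketch might look as follows. The DataFrame and column names (hives with city, cell, year, n_hives; ugs with city, cell, ugs_km2) are assumptions for the example, not the published data structure.

import pandas as pd

def ugs_balance(hives, ugs, ccv=7.5, increase=0.0):
    # Aggregate hives per 1-km2 cell and year, then apply Eqs. (1)-(3).
    n = hives.groupby(["city", "cell", "year"], as_index=False)["n_hives"].sum()
    df = n.merge(ugs, on=["city", "cell"], how="left")
    df["available_ugs"] = df["ugs_km2"] * (1 + increase)       # Eq. (1), with simulated increase I
    df["required_ugs"] = df["n_hives"] / ccv                   # Eq. (2), CCV in hives per km2 of UGS
    df["balance"] = df["available_ugs"] - df["required_ugs"]   # Eq. (3)
    return df

# Example: share of cells with a positive balance per city, for CCV = 7.5 and a 50% UGS increase
# share = ugs_balance(hives, ugs, ccv=7.5, increase=0.5).groupby("city")["balance"].apply(lambda b: (b >= 0).mean())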
Raw data on urban beekeeping can be obtained from the cantonal veterinary offices under a confidentiality agreement. Raw data on land cover are available online in the Zenodo repository under the https://doi.org/10.5281/zenodo.4407051 (see ref. 11 ). Processed data used for the analyses can be found in the repository ENVIDAT under the https://doi.org/10.16904/envidat.239 28 . Data that support the findings of this study are presented within the main text, figures and the supplementary material. Orthophotos from Switzerland at 25 m resolution can be obtained from the Federal Office of Topography Swisstopo ( https://www.swisstopo.admin.ch/en/geodata/images/ortho/swissimage25.html#links ). Code is available from Zenodo under the https://doi.org/10.5281/zenodo.5618254 29 . A Correction to this paper has been published: https://doi.org/10.1038/s42949-022-00059-9 Federal Office for the Environment (FOEN). Action Plan for the Swiss Biodiversity Strategy (FOEN, Bern, 2017). Geldmann, J. & González-Varo, J. P. Conserving honey bees does not help wildlife. Science 359 , 392–393 (2018). Egerer, M. & Kowarik, I. Confronting the modern gordian knot of urban beekeeping. Trends Ecol. Evol. 35 , 956–959 (2020). Herrera, C. M. Gradual replacement of wild bees by honeybees in flowers of the Mediterranean Basin over the last 50 years. Proc. R. Soc. B Biol. Sci. 287 , 16–20 (2020). Torné-Noguera, A., Rodrigo, A., Osorio, S. & Bosch, J. Collateral effects of beekeeping: impacts on pollen-nectar resources and wild bee communities. Basic Appl. Ecol. 17 , 199–209 (2016). Magrach, A., González-Varo, J. P., Boiffier, M., Vilà, M. & Bartomeus, I. Honeybee spillover reshuffles pollinator diets and affects plant reproductive success. Nat. Ecol. Evol. 1 , 1299–1307 (2017). Prendergast, K. S., Dixon, K. W. & Bateman, P. W. Interactions between the introduced European honey bee and native bees in urban areas varies by year, habitat type and native bee guild. Biol. J. Linn. Soc. 133 , 725–743 (2021). Herbertsson, L., Lindström, S. A. M., Rundlöf, M., Bommarco, R. & Smith, H. G. Competition between managed honeybees and wild bumblebees depends on landscape context. Basic Appl. Ecol. 17 , 609–616 (2016). Stevenson, P. C. et al. The state of the world's urban ecosystems: what can we learn from trees, fungi, and bees? Plants People Planet 2 , 482–498 (2020). Ropars, L., Dajoz, I., Fontaine, C., Muratet, A. & Geslin, B. Wild pollinator activity negatively related to honey bee colony densities in urban context. PLoS ONE 14 , e0222316 (2019). Venter, Z. S. & Sydenham, M. A. K. Continental-scale land cover mapping at 10 m resolution over Europe (ELC10). Remote Sens. 13 , 2301 (2021). Hardin, G. The tragedy of the commons. Science 162 , 1243–1248 (1968). Tew, N. E. et al. Quantifying nectar production by flowering plants in urban and rural landscapes. J. Ecol. 109 , 1747–1757 (2021). Baldock, K. C. R. et al. A systems approach reveals urban pollinator hotspots and conservation opportunities. Nat. Ecol. Evol. 3 , 363–373 (2019). Casanelles-Abella, J. et al. Applying predictive models to study the ecological properties of urban ecosystems: A case study in Zürich, Switzerland. Landsc. Urban Plan. 214 , 104137 (2021). IPBES. Summary for policymakers of the global assessment report on biodiversity and ecosystem services of the Intergovernmental Science-Policy Platform on Biodiversity and Ecosystem Services (IPBES, 2019). IUCN. IUCN's Key Messages. First Draft of the Post-2020 Global Biodiversity Framework. In Convention on Biological Diversity Third meeting of the Open-Ended Working Group on the Post-2020 Global Biodiversity Framework (OEWG3) 15 (IUCN, 2021). Baldock, K. C. R. Opportunities and threats for pollinator conservation in global towns and cities. Curr. Opin. Insect Sci. 38 , 63–71 (2020). Henry, M. & Rodet, G. The apiary influence range: a new paradigm for managing the cohabitation of honey bees and wild bee communities. Acta Oecologica 105 , 103555 (2020). Ignatieva, M. & Hedblom, M.
An alternative urban green carpet. Science 362 , 148–149 (2018). Vega, K. A. & Küffer, C. Promoting wildflower biodiversity in dense and green cities: The important role of small vegetation patches. Urban For. Urban Green. 62 , 127165 (2021). Fabián, D., González, E., Sánchez Domínguez, M. V., Salvo, A. & Fenoglio, M. S. Towards the design of biodiverse green roofs in Argentina: assessing key elements for different functional groups of arthropods. Urban For. Urban Green. 61 (2021). Vega, K. A., Schläpfer-Miller, J. & Kueffer, C. Discovering the wild side of urban plants through public engagement. Plants People Planet 3 , 389–401 (2021). Von Büren, R. S., Oehen, B., Kuhn, N. J. & Erler, S. High-resolution maps of Swiss apiaries and their applicability to study spatial distribution of bacterial honey bee brood diseases. PeerJ 2019 , 1–21 (2019). QGIS Development Team. QGIS Geographic Information System (Open Source Geospatial Foundation Project, 2020). R Core Team. R: A Language and Environment for Statistical Computing (R Foundation for Statistical Computing, Vienna, 2019). R Studio Team. R studio: Integrated Development for R (RStudio, Boston, 2020). Casanelles-Abella, J. & Moretti, M. Challenging the sustainability of urban beekeeping: evidence from Swiss cities. Envidat. https://doi.org/10.16904/envidat.239 (2021). Casanelles-Abella, J. Code for the paper: Challenging the sustainability of urban beekeeping: evidence from Swiss cities (v. 1.0). Zenodo https://doi.org/10.5281/zenodo.5618254 (2021). Swiss Federal Office of Topography Swisstopo. SWISSIMAGE 25. https://www.swisstopo.admin.ch/en/geodata/images/ortho/swissimage25.html#links (Swisstopo, 2021). This study was funded by the Swiss National Science Foundation (project 31BD30_172467) within the programme ERA-Net BiodivERsA project "BioVEINS: Connectivity of green and blue infrastructures: living veins for biodiverse and healthy cities" (H2020 BiodivERsA32015104) and by the Swiss Federal Office for the Environment (FOEN) in the frame of the project "City4Bees" (contract no. 16.0101.PJ/S284-0366). We also acknowledge the Göhner Stiftung for supporting this project. We acknowledge Melissa Dawes for the language editing. We particularly thank Monika Egerer for her comments and thoughts on the manuscript. We also thank Debora Zaugg from the FOEN for supporting this project, and two anonymous reviewers for their comments on the manuscript. Biodiversity and Conservation Biology, Swiss Federal Institute for Forest, Snow and Landscape Research WSL, Birmensdorf, Switzerland Joan Casanelles-Abella & Marco Moretti Landscape Ecology, Institute of Terrestrial Ecosystems, ETH Zurich, Zurich, Switzerland Joan Casanelles-Abella J.C.A. and M.M. conceived the study and collected the data. J.C.A. analysed the data. J.C.A. and M.M. wrote and corrected the manuscript. Correspondence to Joan Casanelles-Abella . Dataset 1. Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons license, unless indicated otherwise in a credit line to the material.
Protein changes as robust signatures of fish chronic stress: a proteomics approach to fish welfare research
Cláudia Raposo de Magalhães1, Denise Schrama1, Ana Paula Farinha1, Dominique Revets2, Annette Kuehn2, Sébastien Planchon3, Pedro Miguel Rodrigues1 & Marco Cerqueira ORCID: orcid.org/0000-0001-7237-50531
Aquaculture is a fast-growing industry and therefore welfare and environmental impact have become of utmost importance. Preventing stress associated with common aquaculture practices and optimizing the fish stress response by quantification of the stress level are important steps towards the improvement of welfare standards. Stress is characterized by a cascade of physiological responses that, in turn, induce further changes at the whole-animal level. These can either increase fitness or impair welfare. Nevertheless, monitoring of this dynamic process has, up until now, relied on indicators that are only a snapshot of the stress level experienced. Promising technological tools, such as proteomics, allow an unbiased approach for the discovery of potential biomarkers for stress monitoring. Within this scope, using Gilthead seabream (Sparus aurata) as a model, three chronic stress conditions, namely overcrowding, handling and hypoxia, were employed to evaluate the potential of the fish protein-based adaptations as reliable signatures of chronic stress, in contrast with the commonly used hormonal and metabolic indicators. A broad spectrum of biological variation regarding cortisol and glucose levels was observed, the values of which rose higher in net-handled fish. In this sense, a potential pattern of stressor-specificity was clear, as the level of response varied markedly between a persistent (crowding) and a repetitive stressor (handling). Gel-based proteomics analysis of the plasma proteome also revealed that net-handled fish had the highest number of differential proteins, compared to the other trials. Mass spectrometric analysis, followed by gene ontology enrichment and protein-protein interaction analyses, characterized those as humoral components of the innate immune system and key elements of the response to stimulus. Overall, this study represents the first screening of more reliable signatures of physiological adaptation to chronic stress in fish, allowing the future development of novel biomarker models to monitor fish welfare.
Managing welfare of fish in captivity is of increasing importance, both for productivity and sustainability reasons [1]. There is still no clear consensus on how welfare should be defined or objectively measured, given the complexity and controversy of the concept [2, 3]. Challenges like divergent coping mechanisms, the incomplete knowledge regarding the nociceptive system of fish (e.g. emotional-like states, cognitive abilities, pain, suffering) [4,5,6,7] and the lack of reliable physiological indicators of fish welfare make its investigation even more difficult [8]. An aquaculture rearing facility deals with multiple stressful situations (stressors) that are inherent to daily routines and can compromise the fish's well-being. These situations are usually unpredictable and uncontrollable for the animal and can range in duration and severity [8, 9]. Fish launch a physiological response when faced with these threatening situations [9, 10]. This adaptive mechanism, known as the stress response, involves a cascade of reactions and enables the fish to cope with the stressor.
However, when a stressful event is repeated or prolonged, it exceeds the organism's natural regulatory capacity and the fish fails to regain homeostasis, consequently impairing welfare [11, 12]. The physiological stress response starts with the immediate activation of the sympathetic response, followed by a slightly delayed activation of the hypothalamo-pituitary-interrenal (HPI) axis. As a result, catecholamines and corticosteroids (cortisol in teleosts), respectively, are released into the bloodstream [13, 14]. These hormones lead to a series of downstream responses involving alterations in the energy metabolism and respiratory and immune functions [15]. The rapid mobilization of energy substrates such as glucose (the fuel needed for the coping mechanisms) is caused by the activation of glycogenolysis in the liver and muscle by catecholamines, and of hepatic gluconeogenesis by cortisol [16, 17]. Stressful stimuli can also lead to strenuous exercise fuelled by anaerobic glycolysis in the muscle, generating lactate, which is then released into plasma [18, 19]. Prolonged exposure to the stressor will inevitably lead to alterations that are reflected in whole-animal performance, like perturbations at the reproduction, immunological, growth and behaviour levels [20]. The plasma levels of cortisol, alongside glucose and lactate, are the most commonly used physiological indicators to assess stress in fish [21]. Nevertheless, some inconsistencies have been reported in several experimental studies. This demonstrates the unreliability of these indicators, mainly in cases of chronic stress, which is mostly due to: (i) a high variability of response levels; (ii) a decrease of the cortisol levels to basal levels within minutes/hours following an acute stressor; (iii) the fact that fish can adapt, to a certain extent, to chronic stress, so that the cortisol response is attenuated; and (iv) the intrinsic and extrinsic factors (e.g. age, sexual maturity, social status, level of domestication, prior experience, nutritional status) that can affect cortisol secretion [22,23,24,25,26,27]. In this sense, it is vital to complement the existing behavioural, biochemical and physiological measures for a correct interpretation of the welfare status of the fish. This will be crucial to form a robust welfare assessment and to allow, in the future, the development of targeted recommendations and legislation. With the increasing research into the welfare of cultured fish, more advanced technologies are gaining popularity. Proteomics is a promising alternative for the discovery of candidate molecular markers that can indicate physiological alterations due to stress exposure [28]. Despite the limitations to the use of these technologies in the aquaculture field [29], several studies have already demonstrated the potential of proteomics for the identification of stress signatures [30,31,32,33,34]. There is very little data available concerning the process of long-term coping with a chronic stressor and the indicators to be used in this case. Considering this gap in research, we aim, in the present study, to comparatively assess the stress responses of fish at different levels (i.e., plasma stress markers, changes in plasma proteins' abundance and muscle biochemistry).
Using Gilthead seabream (Sparus aurata) as a model, three chronic stress conditions were employed, and proteomics was used to benchmark potential signatures of stress adaptation in the plasma proteome, since several proteins resulting from physiological events are released into circulation. Gilthead seabream was the chosen species in this study since it is one of the most important species in European aquaculture, with high commercial value. This work aims to pioneer a better understanding of the underlying molecular mechanisms behind the fish physiological adaptation to long-term stress. Additionally, it aims to bridge the gap between the scientific community and the industry by paving the way for the development of novel biomarkers to monitor fish welfare. Fish general condition Fish were monitored every day during the trials. The experimental periods reached the end with a 100% survival rate. The overall condition and growth performance of the fish were also monitored (Additional file 1), and initial (IBW) and final body weights (FBW) were recorded for each experiment. The average body weight was reduced by the end of the net handling (NET) and hypoxia (HYP) trials, in all groups, including the control. However, there were no significant differences in final body weights between the control group and any of the stressed groups (P > 0.05), suggesting that weight reductions were unrelated to the stressor. Plasma stress markers analysis Circulating cortisol, glucose, and lactate levels were measured in Gilthead seabream submitted to different chronic stressors and in control fish (Fig. 1). The overall levels of these metabolites showed a high variability of biological responses in all trials, with several data points considered outliers (outside of the interquartile range). Cortisol levels presented the highest intervals of values. In the overcrowding (OC) trial, only lactate plasma levels presented statistically significant differences, both between control and stressed groups (LactateCTRL – 13.46 ± 4.07, LactateOC30 – 17.79 ± 5.29, LactateOC45 – 19.89 ± 6.19, P = 2.89e−3). Curiously, in the case of cortisol, although not significant, stressed fish showed decreased levels compared to control (CortisolCTRL – 28.03 ± 30.78, CortisolOC30 – 12.51 ± 13.13, CortisolOC45 – 8.54 ± 10.92, P = 1.20e−1). In the NET trial, statistically significant differences were registered for the cortisol and glucose plasma levels, again between control and the stressed groups (CortisolCTRL – 29.38 ± 38.06, CortisolNET2 – 55.69 ± 41.05, CortisolNET4 – 84.83 ± 50.77, P = 5.15e−4; GlucoseCTRL – 46.57 ± 6.58, GlucoseNET2 – 69.76 ± 12.90, GlucoseNET4 – 66.60 ± 13.74, P = 2.06e−6). In the HYP trial, significant differences were only observed in the glucose levels (GlucoseCTRL – 55.85 ± 12.72, GlucoseHYP30 – 96.22 ± 45.53, GlucoseHYP15 – 79.30 ± 15.78, P = 1.07e−4). Cortisol values are presented as mean ± standard deviation (S.D.) in ng/ml, and glucose and lactate values in mg/dl. Violin plots showing the distributions of plasma cortisol (ng/ml), glucose (mg/dl) and lactate (mg/dl) levels of gilthead seabream (Sparus aurata) submitted to different chronic stressors (a – overcrowding, b – net handling, c – hypoxia), at two intensities, and unstressed fish (control) (n = 18). The box-plot inside includes observations from the 25th to the 75th percentiles as determined by R software; the horizontal line indicates the median value. Whiskers extend 1.5 times the interquartile range. Single data points are outlying data.
*P < 0.05; **P < 0.01; ***P < 0.001; **** P < 0.0001. NS (not significant) indicates a P-value greater than 0.05 Post-mortem muscle biochemical changes Muscle pH declined over 72 HAD, in Gilthead seabream stored in ice. Values ranged from an average of 7.4, 7.7 and 7.4 immediately after slaughtering, to 6.3, 6.5 and 6.4 at the last sampling time, in fish from OC, NET and HYP trials, respectively. Significant differences between conditions were found for the NET trial at 4 (PNET2-NET4 = 0.032) and 72 HAD (PCTRL-NET2 = 0.008), and for the HYP trial at 0 (PCTRL-HYP15 = 0.021), 8 (PCTRL-HYP15 = 0.003, also in HYP30-HYP15 with lower significance), 48 (PCTRL-HYP15 = 0.006) and 72 HAD (PHYP30-HYP15 < 0.001) (Fig. 2). Post-mortem changes in muscle pH and rigor mortis of gilthead seabream (Sparus aurata) submitted to different chronic stressors (a – overcrowding, b – net handling, c – hypoxia), in two intensities, and unstressed fish (control), stored in ice for 72 h. Data points are the mean ± S.D. of n = 9 for each sampling time. Means labelled * are different at P < 0.05 The onset and resolution of rigor mortis (Fig.2) showed significant differences between treatments in the NET and HYP trials, specifically at 8 HAD (PCTRL-NET4 < 0.001), and at 8 (PHYP30-HYP15 < 0.001) and 24 HAD (PHYP30-HYP15 = 0.020), respectively. In the OC trial, fish reached an average maximum rigor strength at 24 HAD. In the NET trial, averaged maximum rigor strength was reached at 48 HAD in CTRL and NET2, and at 24 HAD in NET4 group. In the HYP trial, all groups reached averaged maximum rigor strength at 48 HAD. Plasma proteomics analysis A comparative proteomics analysis of the Gilthead seabream plasma between the control and the stress treatments detected, 681, 752 and 681 protein spots for the OC, NET and HYP trials, respectively, within the pH range of 4–7 and a molecular mass range of 11–114 kDa. After statistical analysis, 19, 360 and 34 protein spots within the OC, NET and HYP trials, respectively, were found to present significantly differential abundance (significance threshold at P < 0.05) between experimental conditions. From these, 7, 171 and 12 were manually excised from the 2D gels for MALDI-TOF/TOF MS analysis. No proteins were identified with significance for the OC trial. For the NET and HYP trials, 107 and 2 differential protein spots, respectively, were successfully identified by a combination of PMF and MS/MS search, with significant scores (protein score > 76, total ion score > 60, P < 0.05). Among the spots identified from the NET trial, 13 showed more than one significant protein identification (202, 326, 521, 559, 586, 604, 677, 877, 950, 959, 990, 996 and 1157), indicating that multiple proteins migrated to the same spots on the gel. The identified proteins are listed in an additional file (see additional file 2). A representative 2D-gel of the Gilthead seabream plasma proteome is shown in Fig. 3. Representative pattern of gilthead seabream (Sparus aurata) blood plasma on a 12.5% polyacrylamide 2D gel. Black circles represent the 107 proteins identified by MALDI-TOF/TOF MS with significant differences in abundance in NET groups and black squares the 2 proteins with significant differences in abundance in HYP groups (P < 0.05) Considering the number of identifications in each trial, only the 107 identified protein spots from the NET trial were considered for further statistical and bioinformatics analyses. At this step, a log-fold change cut-off of ±1.0 (P < 0.05) was applied (Fig. 
4-a) and a total of 56 identified protein spots (corresponding to 20 single entries) were considered significant. From these, 19 were up-regulated in stressed fish and 34 were down-regulated. Three spots (502, 990 and 1021) showed multi-expression patterns and could not be classified as up- or down-regulated. Seventeen protein spots (502, 715, 841, 905, 908, 919, 937, 939, 967, 997, 1004, 1016, 1021, 1151, 1221, 1238 and 1250) were identified as apolipoprotein A-I, whereas 13 were down-regulated in stressed fish. Four spots (864, 869, 990 and 996) were identified as apolipoprotein Eb and 2 were up-regulated. Complement factor B was identified in 4 spots (144, 146, 152 and 737) and complement component C3 in 5 spots (591, 593, 595, 1048 and 1083) from which 3 from each were up-regulated. Two protein spots (796 and 833) were identified as warm-temperature acclimation-related 65 kDa, 1 down- and 1 up-regulated. Three spots (202, 206 and 209), identified as inter-alpha-trypsin inhibitor heavy chain H3, 2 spots (224 and 229) as alpha-2-macroglobulin and 5 (558, 751, 843, 904 and 1079) identified as transferrin were down-regulated. Two spots (663 and 710), identified as haptoglobin, were found to be up-regulated. Fibrinogen alpha-chain was identified in two spots (521 and 544) and were both up-regulated. Alpha-1-antitrypsin homolog, apolipoprotein B-100, beta-actin, calcium/calmodulin-dependent protein kinase type II, leucine-rich alpha-2-glycoprotein, fetuin-B-like, hemopexin-like, hyaluronic acid-binding protein 2 and pentraxin were identified in a single protein spot each. a – Volcano plots of the entire set of plasma proteins detected by DIGE analysis on the NET trial samples. Each point represents the difference in abundance (fold-change) between stressed fish (NET2 on the left; NET4 on the right) and control fish plotted against the level of statistical significance. Dotted vertical lines represent a 2-fold variation in abundance, while dotted horizontal line represent the significance level of P < 0.05. Red dots represent proteins significantly up- and down-regulated. b – Principal component analysis performed with the normalized spot volumes of the 107 identified proteins in the plasma samples of gilthead seabream from the NET trial (n = 6). Blue, orange and red dots represent CTRL, NET2 and NET4 groups, respectively. c – Hierarchical clustering of 107 significantly differential proteins identified in the plasma samples of gilthead seabream from net handling (NET) trial. Rows represent expression patterns of individual proteins, while each column corresponds to a biological replicate (fish). Cell colour indicates the normalized Z-scores of the spot volumes Hierarchical clustering (HCA) and principal component (PCA) analyses were performed for the identified 107 proteins spots with differential relative abundance across NET groups to check how well the samples grouped based on the expression patterns of the protein spots. The PCA (Fig.4-b) showed two main clusters belonging to the control and NET4 samples, while 2 biological samples belonging to the NET2 group clustered together with the control samples and 1 with the NET4 samples. The 107 differential protein spots were centralized into two principal components (PC), PC1 and PC2, which represented the maximum variation (65.6%) and the next highest variation (5.5%), respectively. The HCA (Fig. 4-c) likewise revealed two main groups regarding the biological replicates, as observed by the top dendrogram. 
The protein spots were also grouped into two main clusters, one displaying a pattern of higher relative abundance and the other of lower relative abundance in stressed fish, when compared to the control. As described above for the PCA, higher variability in NET2 was also shown in the HCA. For the network and GO enrichment analyses the subset of 20 single protein identifications mentioned above was blasted against Danio rerio in the UniprotKB database. A PPI network (Fig.5-a) was generated on the STRING web tool revealing 61 edges among 18 nodes/proteins (2 proteins had no interaction with the main network), with a clustering coefficient of 0.677 and a very significant enrichment value (P < 1.0e− 16). The analysis was performed on Cytoscape and specific topological parameters were selected to demonstrate the importance and distribution of the nodes in the network: a darker colour intensity of the nodes indicates higher degree, while the size was estimated using the variation in protein abundance (fold-change). For every single entry, one protein spot was chosen as the most representative of each protein (Table 1), based mainly on the protein score and experimental molecular weight and pI close to the theoretical ones. From these 18 spots, 11 were down- and 7 were up-regulated, however, these differences in abundance were mostly significant (log-fold change > 1.0 or < − 1.0, q-value < 0.05) for the NET4 treatment (only 2 were exclusively significant for the NET2 treatment and 2 were significant for both treatments). Thus, the fold-change of these 18 spots between NET4 and CTRL groups was used to estimate the size of the nodes on the PPI network, which ranged from − 4.04 to + 2.78. SERPINC1 (antithrombin-III), TFA (transferrin) and FGA (fibrinogen alpha-chain) occupied the most central positions in the network having the highest number of interactions, while APOA1 (apolipoprotein A-I) showed the highest number of experimentally demonstrated interactions, mainly with APOEB (apolipoprotein Eb), APOBB (apolipoprotein B-100) and FGA. GO Enrichment analysis (Fig. 5-b) revealed 19 overrepresented (hypergeometric test, FDR < 0.05) GO Biological Process (BP) terms, mostly linked to the immune system and response to stimulus. No annotations were retrieved for alpha-1-antitrypsin, leucine-rich alpha-2-glycoprotein-like, apolipoprotein A-I, apolipoprotein B-100-like, haptoglobin, pentraxin and hyaluronic acid-binding protein 2. In the horizontal bar plot (Fig. 5-b), only the 9 most significant terms are represented. GO Molecular function enrichment analysis accounted for 8 terms with 5 main proteins (alpha-1-antitrypsin, antithrombin-III, inter-alpha-trypsin-inhibitor, kininogen and alpha-2-macroglobulin) while GO Cellular component revealed 4 enriched terms with 2 main proteins (fibrinogen alpha-chain and alpha-2-macroglobulin). A complete list of all GO terms is described on the additional file 3. A – Protein-protein interaction network generated with 18 differential proteins identified in the plasma of fish from NET trial. Nodes represent proteins and edges the functional associations between them. STRING annotations are described in Table 1. Red arrows represent up-regulated proteins in both treatments; blue arrows represent down-regulated proteins in both treatments. 
B – GO enrichment analysis of the 18 proteins showing significantly differential abundance between control and NET treatments (hypergeometric test, FDR < 0.05) Table 1 String annotations and fold-changes of the proteins in the PPI network. Bold lettering in the "FC" column indicates significant fold-changes (> 1.0 and < −1.0). The list is given in ascending order of spot number. In this study, the stress response of farmed Gilthead seabream adults to chronic stress conditions was primarily assessed by observing both changes in the concentration of routine plasma stress indicators, namely cortisol, glucose and lactate, and post-mortem biochemical parameters, namely pH and rigor mortis. To evaluate the existence of possible unbiased and reliable markers of chronic stress, proteomics was used to verify the potential of fish protein-based adaptations. Cortisol is the most commonly used physiological indicator of the primary response to stress [21]. However, it has been shown that this corticosteroid is not a reliable biomarker of long-term stress exposure [27, 35,36,37]. In this study, Gilthead seabream that endured high stocking densities over 54 days showed a possible reconfiguration of the cortisol response. This is supported by the observed downward trend of this metabolite, as compared to unstressed fish. This is suggestive of either a non-activated or an altered responsiveness of the HPI axis, which sometimes leads to hyporeactivity of the corticosteroid response [38]. The same outcome was observed in juvenile Gilthead seabream confined for 14 days at 26 kg/m3 [39] and in meagre cultured at different stocking densities for 40 days [40]. In the NET trial, however, apart from the wide dispersion of observations, plasma cortisol levels were significantly higher in handled fish. This result suggests that the fish were not able to appropriately adapt to the handling stressor. Its persistence, unpredictability and severity could have prevented the possibility of habituation. Regarding the HYP trial, no effect of the 48 h of hypoxia was observed in these fish. This suggests an acclimation to the low oxygen environment by a possible adjustment of the oxygen requirement (e.g. reduction of high energy behaviours). Overall, the aforementioned observations suggest that the cortisol response and the capacity for adaptation are modulated by the nature, duration and intensity of the stressor. However, other factors like species, age, sex and individual coping mechanisms seem to be ubiquitous and impact the adaptive processes [24, 38, 41]. This process of stress habituation has already been suggested and demonstrated in other studies [37, 42], but the mechanism is not yet well understood. High individual variability was also observed in each trial, most likely due to individual differences in the stress response related to intrinsic factors of the animal (e.g. coping styles, cognitive perception) [39, 43, 44]. Additionally, values registered for control fish, in every trial, are higher than the reference values reported in the literature for this species [45]. These discrepancies may have several causes, which is why cortisol should be used with caution when evaluating the magnitude of the stress response. Moreover, the difficulty of measuring resting cortisol levels is also acknowledged to be one of the causes of these discrepancies.
The lack of proper planning when sampling cortisol, or the manipulation needed to net and anaesthetize the fish, can result in high "control" cortisol levels that do not correspond to the "genuine" basal levels, i.e., the levels of non-manipulated fish. Also, it is well-established that following the perception of an acute stressor, the levels of circulating stress markers increase within the first minutes or hours of the stress response, returning to basal levels as time elapses, usually within 24 h [17, 46, 47]. Secondary physiological responses are characterized by an increase in glucose and lactate levels in blood plasma in order to satisfy the increased energy expenditure. Changes in glucose usually follow similar trends as cortisol after the stressor [14]. This is corroborated in this study by the levels of plasma glucose registered in all trials (Fig. 1). Glucose levels, besides following the same trend as cortisol levels, are, in general, below the basal values for this species [45]. This could be related to the fish's inability to maintain the same levels of glucose in the blood due to the high demand for glucose mobilization to other tissues. The decrease of plasma glucose levels in OC is in agreement with the decrease in the cortisol levels, supporting the hypothesis of habituation or exhaustion of the endocrine system [27]. The significant increases in the plasma glucose levels of stressed fish from the NET and HYP trials are consistent with previous studies. These showed that glucose rises during air exposure or low oxygen levels, due to stimulation of muscle glycogenolysis and hepatic gluconeogenesis, where glucose is synthesized to maintain the energetic substrates' demand [48]. Similar to cortisol, glucose and lactate circulating levels also return to basal levels within hours post-stressor, which further makes these metabolites unreliable markers in the case of prolonged stressors [49, 50]. Additionally, studies also demonstrate that glucose variations in the blood are not only hormonally induced by stressful practices. Factors like variations in the water temperature and pH, anaesthesia, diet composition or fasting can also affect plasma glucose levels [51, 52]. When insufficient oxygen is available to maintain aerobic ATP production, fish resort to anaerobic metabolism to meet cellular requirements. This shift consequently leads to lactate accumulation in the muscle [19, 53]. In this study, changes in the circulating lactate levels do not follow the same trends as the cortisol and glucose variations. Statistically significant differences in the lactate levels were only observed in the OC trial. In this case, if the cortisol response is indeed lower due to HPI-axis acclimation, as suggested before, the rate of lactate recycling through hepatic gluconeogenesis is reduced, explaining the significant plasma lactate increase in stressed fish. Additionally, previous studies show that during hypoxia or intense swimming activity, fish produce lactate in the muscle at a higher rate than it can be processed by other tissues [53]. Post-mortem muscle pH and rigor mortis have been used as tissue indicators of ante-mortem stress in numerous fish species [54,55,56]. After the fish's death, both blood circulation and oxygen supply cease. The major source of ATP to the muscle is thus lost, since glycogen can no longer be oxidized. However, for a limited time after death, ATP in the muscle is maintained at a definite level by creatine kinase.
Consequently, the depletion of ATP reserves stimulates the breakdown of glycogen by anaerobic glycolysis in the muscle, in order to maintain the energy supply. This process results in the accumulation of lactic acid, generating H+ ions and consequently lowering muscle pH [57]. Glycolysis continues until all glycogen is consumed or the glycolytic enzymatic system is made inactive by the low pH. Hence, the magnitude and rate of this pH fall depend on the fish's energy reserves prior to death. These energy reserves can be influenced by the intensity and duration of the stress while the fish is alive. To our knowledge, no studies have been performed with Gilthead seabream regarding the effects of long-term chronic stressors on the evolution of post-mortem biochemical processes in the muscle. Results from this study (Fig. 2) followed the same pH trends as previous studies on gilthead seabream [58, 59]. However, in the existing studies on pre-slaughter stress [55, 60, 61], muscle pH values immediately after death are lower than the ones found in this study, suggesting that stress at slaughter was low in our fish. Poli et al. [56] state that in cases of exposure to a chronic stressor for a long time before death, the lactic acid produced can be gradually cleared from the muscle, but simultaneously the energy sources, like glycogen, will likewise gradually become exhausted. Hence, when the fish is killed, muscle pH does not suffer a dramatic fall, due to an early end of post-mortem anaerobic glycolysis caused by energy source scarcity. This might explain the significant differences found in the HYP trial, where the highest pH values were observed in the highly stressed fish (HYP15), suggesting that these fish had lower energy reserves. Nevertheless, pH values registered after 24 HAD, in every treatment, are in agreement with those reported by previous studies in this species at the same sampling times [58, 62]. Rigor mortis is inextricably correlated with muscle ATP and the pH decline. The onset of rigor mortis occurs with ATP depletion. When ATP reaches low levels, actin and myosin in the muscle bind together, forming the actomyosin complex and causing stiffness of the fish body [63]. A strong relationship between low muscle pH immediately after death and a rapid onset of the rigor state was demonstrated in a range of fish species [57, 60]. In this study, the evolution of rigor mortis (Fig. 2) was similar between treatments and significant differences were only found in the NET and HYP trials at 8, and at 8 and 24 HAD, respectively. A delayed onset was observed, starting between 2 and 6 HAD in every trial and reaching the maximum rigor index between 24 and 48 HAD. This delay is in agreement with the high muscle pH registered immediately after death, supporting the hypothesis of low energetic reserves in our fish at the time of death. Measuring glycogen and ATP content in the fish muscle and liver would be a complementary assessment to infer the energetic reserves and corroborate our hypothesis. Plasma proteins were evaluated in this study since blood plasma is a very informative biological fluid, acting as a mirror of the physiological condition of the organism. Stress and stress-related hormones are recognized as modulators of the fish immune system [64]; however, responses depend on the intensity and duration of the stressor. The innate immune system is a fundamental defence mechanism in fish [65].
The acute phase response is part of this system and it is mainly regulated by cytokines and glucocorticoids [66]. This response is characterized by the release of acute-phase proteins (APP), by the hepatocytes, into circulation [67]. APP can be classified as "positive" or "negative" depending on whether their plasma concentration increases or decreases during activation of this response [68]. The response profile of our fish demonstrated the same tendency of protein changes. In this study, the pattern of protein changes observed in the plasma indicates that the fish's immune system was affected mainly by the net handling and hypoxia stressors. Nevertheless, net handling was shown to have the greatest impact. The levels of 20 different plasma proteins (distributed across 56 significantly differential spots), all related to immunological processes, were shown to be modulated by repetitive net handling, compared to two proteins modulated by hypoxia. As mentioned previously, the same proteins were often detected in different spots on the 2D gels. Such a phenomenon may be due to existing isoforms or caused by adaptive changes of the proteome in an attempt to maintain cellular homeostasis. This adaptation may involve changes at the level of protein degradation, localization, function and activity, all of which can be modulated by post-translational modifications (PTMs) [69]. PTMs can regulate fundamental biochemical processes and be more energetically efficient than altering protein abundance, constituting potentially interesting signatures of stress. Studies on PTMs in fish are still scarce. The changes detected in protein abundance (listed in Additional file 2), along with the PPI network and GO enrichment analyses performed (Fig. 5), confirmed the involvement of several components of the innate immune system in the physiological adaptation to these stressors. Proteins considered to be "positive" APP were likewise shown to be increased in abundance in the plasma of fish stressed by net handling (fibrinogen alpha-chain, complement component C3, haptoglobin, complement factor B, warm-temperature acclimation-related 65 kDa protein, alpha-1-antitrypsin), while proteins considered as "negative" were decreased (transferrin, inter-alpha-trypsin inhibitor, apolipoprotein A-I) [70]. A diverse number of proteins involved in the acute phase response was also previously found to be modulated in chronically stressed gilthead seabream [71]. Apolipoprotein A-I (Apo-AI) was modulated only by net handling stress. Seventeen proteoforms were identified in the plasma proteome map, being mostly decreased in abundance compared with the unstressed fish. Apo-AI is the main protein constituent of high-density lipoprotein (HDL), playing a role in lipid metabolism and participating in the reverse transport of cholesterol from tissues to the liver [72, 73]. Apo-AI was also found to be decreased in abundance in crowded Atlantic salmon [74]. In cod (Gadus morhua) it acted as a negative regulator of the complement system [75]. Two other apolipoproteins were also found to be down-regulated in the plasma of fish from the NET2 and NET4 groups (apolipoprotein Eb and apolipoprotein B-100). The complement system is an essential part of the innate immune system, which can be activated through three pathways: the classical, alternative and lectin pathways [76]. Fish display a plethora of complement components, mainly complement component C3 (C3), which may present around five proteoforms in a single species [77].
C3 is one of the most abundant proteins in the plasma and plays a central role in the innate immune system, supporting the activation of all three pathways [76]. In this study, C3, identified in 5 proteoforms, and complement factor B (Bf), identified in 4, were found to be increased in abundance by net handling. Contrarily, C3 was down-regulated in fish exposed to low oxygen levels. Bf also plays a role in complement activation by acting as the catalytic subunit of C3 convertase, an enzyme responsible for the proteolytic cleavage of C3, in the classical and alternative pathways [76]. Several metal-binding proteins, existent in the plasma of vertebrates, can chelate iron, zinc and copper, which are essential elements for the virulence of bacteria [78]. Alpha-2-macroglobulin (A2M) is a multifunctional protein [79] found to be down-regulated in the plasma of fish submitted to handling stress. It is mostly known to act as a broad range serine proteinase inhibitor and to bind metal ions [78]. Contrarily, haptoglobin, which is also responsible for the sequestration of iron by binding to hemoglobin, was found to be increased in the plasma of handled fish. Similarly, warm-temperature acclimation-related 65 kDa protein (Wap65), which is involved in the scavenging of free heme [80], was increased in abundance by net handling and hypoxia stressors. Wap65 in fish is the homologue of mammalian hemopexin [81] and in most teleosts presents two proteoforms [82]. In this study, two spots were also matched to this protein suggesting the presence of these two proteoforms. Transferrin (Tf) decreased in abundance in the plasma of fish stressed by net handling. Tf is a plasma protein also capable of binding iron and an important constituent of the iron homeostasis [33]. In fish, antiproteases are important participants of the non-specific humoral immune defence mechanism [70]. A2M is an important factor in this mechanism. Alpha-1-antitrypsin is a serine protease inhibitor, up-regulated in net-handled fish, which is responsible for negatively regulating blood clotting molecules to prevent thrombosis [83]. Inter-alpha-trypsin inhibitor H3 is also a serine protease inhibitor, which was found to be down-regulated in the plasma of fish from NET groups. The same pattern of protein changes was verified for fetuin-B, a cysteine proteinase inhibitor recently described in teleosts [84]. Finally, fibrinogen alpha-chain, a beta-globulin involved in blood clotting, an integral part of innate immunity [83], was found to be up-regulated in the plasma of fish belonging to NET groups. In summary, the overall results suggest that physiological changes were higher in fish exposed to repeated handling, while mild and permanent stressors may allow the fish to refine their physiological processes and adapt to certain challenges. The variability in the response levels of cortisol, glucose and lactate, in fish from the same groups, alongside the possible adaptation suggested by the results, demonstrate that these indicators may not be the most robust in case of chronic stress monitoring. On the other hand, plasma proteomics allowed the detection of a cohesive network of protein changes associated with essential immunological pathways in stressed fish. These proteins will be useful in understanding the biological processes behind protein-based stress adaptation in fish and may, therefore, represent the first screening for potential biomarker candidates of chronic stress in gilthead seabream. 
This work is the first step towards a more scientific and reliable assessment of fish welfare. A multidisciplinary approach, studying the stress response from the molecular to the behavioural level, might just be the holistic approach needed to achieve such a goal. Gilthead seabream (Sparus aurata) were obtained from a commercial fish farm (Maresa, Mariscos de Estero S.A., Huelva, Spain) and kept under quarantine conditions for a 2-week period at the Ramalhete Research Station (CCMAR, University of Algarve, Faro, Portugal). The fish were then individually weighed and distributed among conical fiberglass tanks (500 L), according to the density requirements of each trial. The tanks were supplied with natural flow-through seawater from Ria Formosa, and kept under natural temperature (13.4 ± 2.2 °C) and photoperiod, salinity at 34.7 ± 0.8 ‰, and artificial aeration (dissolved oxygen above 5 mg L−1). Fish were fed by hand once a day, with a diet manufactured by AquaSoja Portugal, following the species' nutritional requirements. The study was performed in three separate trials, due to logistic issues: [1] Overcrowding (OC), [2] Net Handling (NET) and [3] Hypoxia (HYP). Each trial followed a 2-week acclimation period and the initial rearing density was established at 10 kg/m3 (except in the experimental groups of high stocking densities). In the OC trial, during the 54 days of the experiment, fish (initial body weight (IBW) = 372.33 ± 6.55 g) were stressed using different high stocking densities, by increasing the number of fish in the tanks. Three different experimental groups were tested in triplicate: Control – 10 kg/m3 (OCCTRL), medium density – 30 kg/m3 (OC30), high density – 45 kg/m3 (OC45). The NET trial lasted for 45 days and the fish (IBW = 375.69 ± 11.88 g) were stressed by 1-min air exposure, using nets designed to fit inside the tanks and to be lifted to perform the stressful event. The experimental groups were established, in triplicate, as follows: Control – undisturbed fish (the net was also placed in the tanks but not lifted) (NETCTRL), fish air-exposed twice a week (NET2) and fish air-exposed four times a week (NET4). In the HYP trial, fish (IBW = 397.99 ± 16.56 g) were subjected to low levels of oxygen saturation, by injection of nitrogen into the water, for 48 h, according to the following experimental groups (in triplicate): Control – 100% oxygen saturation (HYPCTRL), 30% oxygen saturation (HYP30) and 15% oxygen saturation (HYP15). The different trial durations are due to differences in the nature and severity of the stressors, to which the rearing protocols had to be adjusted accordingly. Sampling procedure Prior to the sampling day, fish were starved for 48 h to clean the digestive tract. Nine random fish per tank were lethally anaesthetized with tricaine methanesulfonate (MS-222; Sigma Aldrich, St. Louis, Missouri, USA) for the following sampling procedures: 3 fish for rigor mortis index assessment, 3 fish for muscle pH measurement and 6 fish for blood collection. Blood samples of approximately 2 ml were collected from the caudal vein with a heparinized syringe and immediately centrifuged at 2000 g for 20 min. Plasma samples were immediately frozen at −80 °C until further analyses. Fish for the measurement of post-mortem biochemical changes (pH and rigor mortis) were stored in polystyrene boxes with ice during the sampling period (72 h). All fish were weighed and measured.
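For quick reference, the design of the three trials described above can be summarized in a small table. The R sketch below simply restates the groups, intensities and durations given in the text; the object and column names are illustrative assumptions, not code from the study.

# Compact restatement of the experimental design described above.
# All values are taken from the text; names are illustrative only.
design <- data.frame(
  trial     = rep(c("OC", "NET", "HYP"), each = 3),
  group     = c("OCCTRL", "OC30", "OC45",
                "NETCTRL", "NET2", "NET4",
                "HYPCTRL", "HYP30", "HYP15"),
  stressor  = rep(c("stocking density (kg/m3)",
                    "1-min air exposures per week",
                    "oxygen saturation (%)"), each = 3),
  intensity = c(10, 30, 45,    # kg/m3
                0, 2, 4,       # air exposures per week
                100, 30, 15),  # % oxygen saturation
  duration  = rep(c("54 days", "45 days", "48 h"), each = 3),
  tanks     = 3                # each group was run in triplicate
)
design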
Plasma stress indicators' measurement Plasma cortisol levels were quantified using a commercial Cortisol ELISA kit RE52061 (IBL International, Hamburg, Germany), following the manufacturer's instructions. Measurements were registered at 450 and 620 nm, along with a prepared standard curve, on a Biotek Synergy 4 Hybrid Technology™ microplate reader (Biotek Instruments Inc., Winooski, USA). Plasma glucose and lactate levels were assessed through commercial colorimetric kits (Spinreact, Girona, Spain), following the manufacturer's instructions. Biochemical and quality characterization of fish muscle Muscle pH measurements were performed (n = 3 per tank), using a waterproof pH spear for food testing (Oakton® Instruments, Nijkerk, Netherlands), in the dorsal muscle, at 0, 1, 2, 4, 6, 8, 24, 48 and 72 h after death (HAD), approximately 1–2 cm apart. At the same post-mortem periods, rigor mortis was assessed (n = 3 per tank) by the rigor index (RI), as previously described [85], using the formula: $$ \mathrm{RI}\,(\%) = \left[ \left( L_0 - L_t \right) / L_0 \right] \times 100 $$ L0 (cm) refers to the vertical distance between the base of the caudal fin and the table surface (where the anterior half of the fish is placed), measured immediately after death, whereas Lt (cm) corresponds to the same distance measured at the selected time intervals. Fish were carefully handled during the measurements to avoid any interference with the rigor onset. Protein labelling Plasma samples were diluted 80x in DIGE buffer (7 M urea, 2 M thiourea, 4% CHAPS, 30 mM Tris pH 8.5) and the protein content was measured with the Bradford assay, using the BioRad Quick Start Bradford Dye Reagent 1X (Bio-Rad Laboratories, Hercules, California, USA) and bovine serum albumin (BSA) as standard (BioRad Bovine Serum Albumin Standard Set; Bio-Rad Laboratories, Hercules, California, USA). The samples' pH was checked with a pH-indicator paper, Sigma-P4536 (Sigma Aldrich, St. Louis, Missouri, USA), and adjusted to 8.5 using 0.1 M NaOH. DIGE minimal labelling of 50 μg of protein was carried out using the CyDye™ DIGE fluor minimal labelling kit 5 nmol (GE Healthcare, Little Chalfont, UK), with 400 pmol of fluorescent amine-reactive cyanine dyes freshly dissolved in anhydrous dimethylformamide (DMF), following the manufacturer's instructions. Labelling was performed on ice for 30 min, in the dark, and the reaction was quenched with 1 mM of lysine for 10 min. For each trial, six samples per experimental condition were labelled with Cy3 and six with Cy5 to reduce the impact of label difference, while an internal standard consisting of a pool of all samples, in equal amounts, was labelled with Cy2. Samples were randomly sorted to avoid labelling bias. Protein separation by 2DE For each strip, 150 μg of protein (50 μg from each dye) were loaded along with rehydration buffer (8 M urea, 2% CHAPS, 50 mM DTT, 0.001% bromophenol blue, 0.5% Bio-lyte 3/10 ampholyte (Bio-Rad Laboratories, Hercules, California, USA)) to complete 450 μl. Passive rehydration was conducted for 15 h on 24 cm Immobiline™ Drystrips (GE Healthcare, Little Chalfont, UK) with linear pH 4–7, on an IPG Box (GE Healthcare, Little Chalfont, UK). Subsequently, isoelectric focusing (IEF) was performed in 5 steps: 500 V gradient 1 h, 500 V step-n-hold 1 h, 1000 V gradient 1 h, 8000 V gradient 3 h and 8000 V step-n-hold 5 h 40 min, for a total of 60,000 Vh, using an Ettan IPGphor (GE Healthcare, Little Chalfont, UK) at 20 °C.
Focused strips were reduced and alkylated with 6 ml of equilibration buffer (50 mM Tris-HCl pH 8.8, 6 M urea, 30% (v/v) glycerol and 2% SDS) containing 1% (w/v) dithiothreitol (DTT) or 2.5% (w/v) iodoacetamide (IAA), respectively, for 15 min each, under constant agitation. Strips were then loaded onto 12.5% Tris-HCl SDS-PAGE gels and run in an Ettan DALTsix Large Vertical System (GE Healthcare, Little Chalfont, UK) at 10 mA/gel for 1 h, followed by 60 mA/gel until the bromophenol blue line reached the end of the gel, using a standard Tris-Glycine-SDS running buffer. Image acquisition and analysis CyDye-labeled gels were scanned on a Typhoon™ 9400 laser scanner (GE Healthcare, Little Chalfont, UK) at 100 μm resolution, with the appropriate laser filters for the excitation and emission wavelengths of each dye (i.e., Cy2–488/520 nm; Cy3–532/580 nm; and Cy5–633/670 nm), according to the manufacturer's recommendations. The voltages of the Photo Multiplier Tube (PMT) were adjusted to obtain maximum image quality with minimal signal saturation and clipping. Gel images were checked for saturation during the acquisition process using the ImageQuant TL software (GE Healthcare, Little Chalfont, UK). The final images were analysed with SameSpots software (Totallab, Newcastle, UK), including background subtraction (average normalized volume ≤ 100,000 and a spot area ≤ 500), filtering, spot detection, spot matching, normalization and statistical analysis. Spot volume ratios that showed a statistically significant difference (abundance variation of at least 1.0-fold, P < 0.05, one-way ANOVA on log2-transformed normalized spot volumes) were processed for further analysis. Protein spots with statistically different intensities were manually excised from preparative gels and identified by matrix-assisted laser desorption/ionization time-of-flight/time-of-flight mass spectrometry (MALDI-TOF/TOF MS). Protein identification by MALDI-TOF/TOF MS Spots from SYPRO® Ruby-stained (Invitrogen™, Carlsbad, CA, USA) gilthead seabream plasma 2D gels were picked and subjected to in-gel tryptic digestion, similar to what was reported before [86]. In this study, gel plugs were washed twice with 50 mM ammonium bicarbonate solution in 50% v/v methanol (MeOH) for 20 min and dehydrated twice for 20 min in 75% acetonitrile (ACN). Proteins were then digested with 8 μL of a solution containing 5 ng/μL trypsin (trypsin Gold, Promega) in 20 mM ammonium bicarbonate for 6 h at 37 °C. A 0.1% trifluoroacetic acid (TFA) solution in 50% ACN and a solution of 7 mg/mL α-cyano-4-hydroxycinnamic acid (CHCA) in 50% ACN/0.1% TFA were used for peptide extraction and spotting, respectively. MALDI-TOF/TOF analysis was performed with a TOF/TOF™ 5800 (AB SCIEX, Redwood City, CA, USA) mass spectrometer in MS and MS/MS mode. For each spot, the 10 most intense peaks of the MS spectrum were selected for MS/MS acquisition. Database interrogation was carried out with ProteinPilot v4.5 (AB Sciex) on an in-house Mascot server version 2.6.1 (Matrix Science Ltd., London, UK). Mass lists were searched against the NCBInr database restricted to the taxonomy "other Actinopterygii" (tax ID 7898 excluding 31,033 and 7955) with the following parameters: maximum 2 missed cleavages by trypsin, peptide mass tolerance ±100 ppm, fragment mass tolerance set to 0.5 Da, carbamidomethylation of cysteine selected as fixed modification, and tryptophan dioxidation, histidine, tryptophan and methionine oxidation, and tryptophan-to-kynurenine as variable modifications.
Protein hits not satisfying a significance threshold (P < 0.05 and a total ion score > 60) were further searched against the vertebrate EST (expressed sequence tag) database, also restricted to the taxonomy "other Actinopterygii". Protein-protein interaction (PPI) network and gene ontology (GO) enrichment analyses The theoretical molecular masses and isoelectric points (pI) of the MS-identified proteins were calculated from the amino-acid sequences (in one-letter code) using the ProtParam Tool (http://us.expasy.org/tools/protparam.html). A significance cutoff was applied for the identified proteins at a log-fold change of ±1.0. Subsequently, the identified proteins were blasted against Danio rerio, on the UniprotKB database, using the FASTA protein sequences as queries. The orthologues were mapped using the STRING web tool v11.0 (https://string-db.org/) to screen for protein-protein interactions (PPI). Gene ontology (GO) enrichment analysis and network visualization and analysis were performed on Cytoscape v3.7.1 (http://www.cytoscape.org/) with the BiNGO plug-in. Important hub proteins were screened by counting the degree of connectivity of each node in the network. Over-represented GO terms were identified, using D. rerio as reference, by selecting the hypergeometric test with a significance threshold of 0.05 after Benjamini & Hochberg FDR correction. All univariate and multivariate statistical analyses were performed using R v3.5.3 for MacOSX (https://www.r-project.org). Statistical analyses of the plasma parameters and the post-mortem muscle biochemical changes were performed using plasma cortisol, glucose and lactate levels, muscle pH and rigor index as dependent variables, and the stress treatment as the factor. Statistical differences between treatments were analysed independently for each trial (OC, NET and HYP). For rigor index and muscle pH, data were processed separately for each sampling time. Differences in plasma and muscle parameters between treatments were assessed by a one-way analysis of variance (one-way ANOVA) on log10-transformed data, except for the rigor mortis data, which were transformed by arcsine square root. Multiple comparisons were carried out by the post-hoc Tukey HSD test. When transformed data failed the Shapiro-Wilk normality test, the non-parametric Kruskal-Wallis test on ranks was used, followed by Dunn's test. When transformed data did not meet the homoscedasticity assumption (Levene's test), statistical significance was analysed by Welch's ANOVA, followed by the Games-Howell test. A significance level of α = 0.05 was used in all tests performed. Experimental data are expressed as mean ± standard deviation (SD). Principal component analysis (PCA) and hierarchical clustering analysis of the identified proteins were performed on the log2-transformed normalized spot volumes obtained from the SameSpots software, with autoscaling. The heatmap was generated by comparing Z-scores of normalized spot volumes, and hierarchical clustering of samples and protein spots was performed using the Euclidean distance as the distance metric and the maximum (complete-linkage) agglomeration method. The authors declare that all relevant data supporting the findings of this study are available within the article (and its additional files).
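To make the statistical workflow described above concrete, the following minimal R sketch applies the same kind of tests to simulated data (R being the software the study reports using). The simulated values, object names and the mention of FSA::dunnTest() are illustrative assumptions rather than the study's actual code; Levene's test, Welch's ANOVA and the Games-Howell procedure mentioned above would slot in analogously.

# Minimal sketch of the group-comparison workflow described above,
# using simulated data rather than the study's measurements.
set.seed(1)
plasma <- data.frame(
  group    = factor(rep(c("CTRL", "NET2", "NET4"), each = 18)),
  cortisol = exp(rnorm(54, mean = 3.5, sd = 0.6))  # ng/ml, simulated
)

# One-way ANOVA on log10-transformed data, followed by Tukey's HSD
fit <- aov(log10(cortisol) ~ group, data = plasma)
summary(fit)
TukeyHSD(fit)

# If the transformed data fail the Shapiro-Wilk normality test, fall back
# to the non-parametric Kruskal-Wallis test (Dunn's post-hoc test is
# available in add-on packages, e.g. FSA::dunnTest())
if (shapiro.test(residuals(fit))$p.value < 0.05) {
  print(kruskal.test(cortisol ~ group, data = plasma))
}

# Rigor index, RI (%) = [(L0 - Lt) / L0] x 100, compared per sampling time
# after an arcsine square-root transformation of the proportion RI/100
ri <- function(L0, Lt) (L0 - Lt) / L0 * 100
rigor <- data.frame(
  group = factor(rep(c("CTRL", "NET2", "NET4"), each = 9)),
  RI    = ri(L0 = 10, Lt = runif(27, min = 0, max = 10))  # cm, simulated
)
summary(aov(asin(sqrt(RI / 100)) ~ group, data = rigor))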
ANOVA: Analysis of variance; DIGE: Differential gel electrophoresis; FC: Fold-change; FDR: False discovery rate; HAD: Hours after death; HCA: Hierarchical clustering analysis; HYP: Hypoxia; IEF: Isoelectric focusing; MALDI-TOF/TOF: Matrix-assisted laser desorption/ionization time-of-flight/time-of-flight; NET: Net handling; OC: Overcrowding; PCA: Principal component analysis; pI: Isoelectric point; PPI: Protein-protein interaction; RI: Rigor index
Huntingford FA, Adams C, Braithwaite VAA, Kadri S, Pottinger TG, Sandoe P, et al. Current issues in fish welfare. J Fish Biol. 2006;68(2):332–72. Branson EJ. Fish welfare. Branson EJ, editor. Oxford, UK: Blackwell Publishing Ltd; 2008. 300 p. Carenzi C, Verga M. Animal welfare: review of the scientific concept and definition. Ital J Anim Sci. 2009;8(sup1):21–30. Braithwaite VA, Ebbesson LO. Pain and stress responses in farmed fish. Rev Sci Tech. 2014;33:245–53. Cerqueira M, Millot S, Castanheira MF, Félix AS, Silva T, Oliveira GA, et al. Cognitive appraisal of environmental stimuli induces emotion-like states in fish. Sci Rep. 2017;7:13181. Castanheira MF, Conceição LEC, Millot S, Rey S, Bégout M-L, Damsgård B, et al. Coping styles in farmed fish: consequences for aquaculture. Rev Aquac. 2017;9(1):23–41. Rose JD, Arlinghaus R, Cooke SJ, Diggles BK, Sawynok W, Stevens ED, et al. Can fish really feel pain? Fish Fish. 2012;15:97–133. Conte FS. Stress and the welfare of cultured fish. Appl Anim Behav Sci. 2004;86(3–4):205–23. Selye H. Stress and the general adaptation syndrome. Br Med J. 1950;1(4667):1383–92. Schreck CB. Stress and fish reproduction: the roles of allostasis and hormesis. Gen Comp Endocrinol. 2010;165:549–56. Korte SM, Olivier B, Koolhaas JM. A new animal welfare concept based on allostasis. Physiol Behav. 2007;92(3):422–8. Ashley PL. Fish welfare: current issues in aquaculture. Appl Anim Behav Sci. 2007;104:199–235. Mommsen TP, Vijayan MM, Moon TW. Cortisol in teleosts: dynamics, mechanisms of action, and metabolic regulation. Rev Fish Biol Fish. 1999;9:211–68. Wendelaar Bonga SE. The stress response in fish. Physiol Rev. 1997;77:591–625. Pottinger TG. The stress response in fish-mechanisms, effects and measurement. In: Branson EJ, editor. Fish welfare. Oxford, UK: Blackwell Publishing Ltd; 2008. p. 32–48. Fabbri E, Moon TW. Adrenergic signaling in teleost fish liver, a challenging path. Comp Biochem Physiol Part B Biochem Mol Biol. 2016;199:74–86. Vijayan MM, Aluru N, Leatherland JF. Stress response and the role of cortisol. In: Leatherland JF, Woo P, editors. Fish diseases and disorders, Vol 2: non-infectious disorders. Oxfordshire: CAB International; 2010. p. 182–201. Milligan CL, Girard SS. Lactate metabolism in rainbow trout. J Exp Biol. 1993;180:175–93. Wood CM, Turner JD, Graham MS. Why do fish die after severe exercise? J Fish Biol. 1983;22:189–201. Boonstra R. Reality as the leading cause of stress: rethinking the impact of chronic stress in nature. Fox C, editor. Funct Ecol. 2013;27(1):11–23. Ellis T, Yildiz HY, López-Olmeda J, Spedicato MT, Tort L, Øverli Ø, et al. Cortisol and finfish welfare. Fish Physiol Biochem. 2012;38(1):163–88. Bonier F, Martin PR, Moore IT, Wingfield JC. Do baseline glucocorticoids predict fitness? Trends Ecol Evol. 2009;24(11):634–42. Davis KB Jr, McEntire ME. Comparison of the cortisol and glucose stress response to acute confinement and resting insulin-like growth factor-I concentrations among white bass, striped bass and sunshine bass. Aquac Am B Abstr. 2006;79.
This study received Portuguese national funds from FCT - Foundation for Science and Technology through project UIDB/04326/2020 and project WELFISH (Refª 16–02-05-FMP-12, "Establishment of Welfare Biomarkers in farmed fish using a proteomics approach") financed by Mar2020, in the framework of the program Portugal 2020. Cláudia Raposo de Magalhães acknowledges an FCT PhD scholarship, Refª SFRH/BD/138884/2018. Denise Schrama acknowledges an FCT PhD scholarship, Refª SFRH/BD/136319/2018.
Centre of Marine Sciences, CCMAR, Universidade do Algarve, Campus de Gambelas, Edifício 7, 8005-139, Faro, Portugal: Cláudia Raposo de Magalhães, Denise Schrama, Ana Paula Farinha, Pedro Miguel Rodrigues & Marco Cerqueira
Luxembourg Institute of Health, Department of Infection and Immunity, 29, rue Henri Koch, L-4354, Esch-sur-Alzette, Luxembourg: Dominique Revets & Annette Kuehn
Luxembourg Institute of Science and Technology, Environmental Research and Innovation (ERIN) Department, 5, avenue des Hauts-Fourneaux, L-4362, Esch-sur-Alzette, Luxembourg: Sébastien Planchon
CRM carried out the animal experiments, the biochemical and the proteomics analyses, performed the statistical analyses and wrote the manuscript. DS assisted with the animal experiments and the proteomics analyses. APF advised and assisted on the bioinformatic data analyses. DR, AK and SP performed the mass spectrometry analyses and participated in the interpretation. PMR designed the experiments and made a major contribution to the writing of the paper.
MC designed and assisted with the experiments and made a major contribution to the writing of the paper. All authors provided a critical review and approved the final manuscript. Correspondence to Marco Cerqueira. This study was approved by the ORBEA Animal Welfare Committee of CCMAR and the Portuguese National Authority for Animal Health (DGAV) on August 26th 2019. The experiment described was conducted in accordance with the European guidelines on the protection of animals used for scientific purposes (Directive 2010/63/EU) and the Portuguese legislation for the use of laboratory animals, under a "Group-1" license (permit number 0420/000/000-n.99–09/11/2009) from the Veterinary Medicine Directorate, the competent Portuguese authority for the protection of animals, Ministry of Agriculture, Rural Development and Fisheries, Portugal, and following category C FELASA recommendations. Additional file 1. Growth performance of gilthead seabream (Sparus aurata) submitted to three different chronic stressors. Values are mean ± SD (n = 75). Protein spots with statistically different relative abundances (P < 0.05), identified in the gilthead seabream (Sparus aurata) blood plasma proteome from the NET and HYP trials by MALDI-TOF/TOF MS after separation by 2D-DIGE. The list is given in ascending order of spot number. List of the overrepresented terms in the GO enrichment analysis of the 18 proteins showing significantly differential abundance between the control and NET treatments (hypergeometric test, FDR < 0.05). Raposo de Magalhães, C., Schrama, D., Farinha, A.P. et al. Protein changes as robust signatures of fish chronic stress: a proteomics approach to fish welfare research. BMC Genomics 21, 309 (2020). https://doi.org/10.1186/s12864-020-6728-4
Lewis Structures - Peripheral Octets
1) Which of the two Lewis structures below is preferable? EN considerations suggest the first structure. Size/charge density suggests the second one; the second structure would likely be more stable because, if we think of the halide series of binary acids, the bromide anion is more stable than the chloride anion (hence hydrobromic acid is a stronger acid than hydrochloric acid): the sheer size of the bromide ion is able to effectively dilute its charge and overcome its lack of electronegativity.
2) I am told that peripheral atoms should have octets. Why is this so? What is the physical reasoning behind this? Does this have anything to do with minimizing the overall potential energy of the molecule - i.e. the peripheral atoms are perhaps more subject to attack than internal atoms (due to steric factors) - so molecules try to minimize such a possibility by completing the octet for peripheral atoms? On the other hand, the octet rule is more of a guideline than a rule.
3) What is the Cl-Br-Cl bond angle? I am told that the bond angle is a perfect 180 degrees. What about the first structure? If we consider the first structure a resonance contributor, wouldn't that lead to a distorted/non-linear bond angle? Or would it still lead to a perfect 180 degree bond angle? To me, it seems that all the lone pairs on the bromine can participate in resonance (if not all at once).
lewis-structure Dissenter
Aren't the 2 resonance structures you've drawn the same? – ron Jul 15 '14 at 0:10
Yes, they are the same. – Dissenter Jul 15 '14 at 0:55
Since they are the same, what do you mean by, "Which of the two Lewis structures is preferable"? – ron Jul 15 '14 at 1:06
No, the pictures in the top row have a double-bonded chlorine, while the picture in the bottom row has no double-bonded chlorine. – Dissenter Jul 15 '14 at 1:11
Quoting Greenwood & Earnshaw's Chemistry of the Elements, discussing the triatomic polyhalide anions: "[I]nteratomic distances [...] are individually always substantially greater than for the corresponding diatomic interhalogen." Unfortunately, the following data table does not contain an actual value for the bond length, though it does indicate that the two $\ce{Br-Cl}$ bonds are of equal length (which we probably could've safely assumed from symmetry considerations and the energetic degeneracy of the orbitals of the two chlorine atoms, if nothing else), and that the $\ce{Cl-Br-Cl}$ bond angle is (approximately) $180^{\circ}$. Bond length can often be correlated to both bond strength and bond multiplicity, especially when comparing compounds of the same atoms. Since the bonds in $\ce{BrCl2-}$ are longer than those in $\ce{BrCl}$, it's reasonable to infer that they are also weaker, and do not have greater multiplicity (i.e., there is no additional $\pi$-bonding). I would further reason that any $\pi$-orbital overlap would have to be very poor (judging by the mismatch in both size and energy of the $3p$ and $4p$ orbitals of chlorine and bromine, respectively), which makes $\pi$-bonding unlikely. (For the sake of completeness, I would note that I could be mistaken here, since $4p$ orbitals are somewhat smaller and lower in energy than expected, given the poor shielding of the nucleus by $3d$ electrons. That said, I don't think the magnitude of this effect is large enough to make a major difference here.)
A full quantitative molecular orbital theory treatment of the molecule would require computation, and I wasn't able to find many useful literature references. However, as the molecule is relatively simple, we can probably make reasonable progress with even a very naive analysis. On the valence level(s), we have a total of $22$ electrons in $12$ atomic orbitals, all $s$ or $p$. These will combine to form twelve molecular orbitals, eleven of which are going to be doubly occupied. Note that for any bonding MO generated, an anti-bonding MO will also be generated. Hence, even if we had six bonding MOs (and, in reality, some would almost certainly be non-bonding), we would have five anti-bonding MOs occupied as well. This would yield an overall bond order of $1$, indicating a bond order of $\frac{1}{2}$ for each individual bond. By analogy to similar (but more common and hence extensively studied) polyhalogen anions (e.g., $\ce{I3-}$), this conclusion seems justified. $\ce{I3-}$, and numerous similar isoelectronic molecules, are often described using the three-center four-electron (3c4e) bond, and I think this model is probably applicable to $\ce{BrCl2-}$ as well. The upshot of all this is that, while it's true that you can draw certain valid resonance contributors, they may not accurately reflect the structure and electron density of the actual molecule. The structure with the negative formal charge on the bromine is best, which agrees with your intuition regarding its size and polarizability being able to stabilize the charge. Due to the disparity in electronegativity, however, the bonds are nevertheless polarized towards the chlorine atoms, so the formal charge alone doesn't fully describe the electron density correctly.
Greg E.
Adding to your answer, it is obvious that a good, representative Lewis structure for these kinds of molecules is impossible (and automagic detection by an NBO analysis will fail). Considering its structure, it is fairly obvious that there has to be $\pi$ bonding (and antibonding), making these 3c4e bonds. I loved your poor-man's MO approach, which is not necessarily as poor - pardon, naïve - as you described it. – Martin - マーチン♦ Jul 16 '14 at 12:46
@Martin, thanks for the feedback. I can't take full credit for that MO analysis; I was borrowing the methodology one of my early professors applied in lecture (some years ago) to explain another 3c4e molecule, $\ce{XeF2}$. I think there's an elegant simplicity to that approach, which agrees well with more sophisticated results, and I felt confident that it should apply to this case as well. – Greg E. Jul 16 '14 at 16:09
Asian-Australasian Journal of Animal Sciences (아세아태평양축산학회지) Pages.1248-1254 Asian Australasian Association of Animal Production Societies (아세아태평양축산학회)
The Effects of Additives in Napier Grass Silages on Chemical Composition, Feed Intake, Nutrient Digestibility and Rumen Fermentation
Bureenok, Smerjai (Faculty of Natural Resources, Rajamangala University of Technology-Isan, Sakon Nakhon Campus) ; Yuangklang, Chalermpon (Faculty of Natural Resources, Rajamangala University of Technology-Isan, Sakon Nakhon Campus) ; Vasupen, Kraisit (Faculty of Natural Resources, Rajamangala University of Technology-Isan, Sakon Nakhon Campus) ; Schonewille, J. Thomas (Faculty of Veterinary Medicine, Department of Farm Animal Health, Division Nutrition, Utrecht University) ; Kawamoto, Yasuhiro (Faculty of Agriculture, University of the Ryukyus)
Received : 2012.02.10 Accepted : 2012.06.04 https://doi.org/10.5713/ajas.2012.12081
The effect of silage additives on the ensiling characteristics and nutritive value of Napier grass (Pennisetum purpureum) silages was studied. Napier grass silages were made with no additive, fermented juice of epiphytic lactic acid bacteria (FJLB), molasses or cassava meal. The ensiling characteristics were determined by ensiling Napier grass silages in airtight plastic pouches for 2, 4, 7, 14, 21 and 45 d. The effect of Napier grass silages treated with these additives on voluntary feed intake, digestibility, rumen fermentation and microbial rumen fermentation was determined in 4 fistulated cows using a $4{\times}4$ Latin square design. The pH value of the treated silages decreased rapidly and reached the lowest value within 7 d of the start of fermentation, as compared to the control. The lactic acid content of silages treated with FJLB was stable at 14 d of fermentation and constant until 45 d of ensiling. At 45 d of ensiling, the neutral detergent fiber (NDF) and acid detergent fiber (ADF) contents of the silage treated with cassava meal were significantly lower (p<0.05) than those of the others. In the feeding trial, the intake of silage increased (p<0.05) in the cows fed the treated silages. Among the treatments, dry matter intake was lowest for the silage treated with cassava meal. The organic matter, crude protein and NDF digestibility of the silage treated with molasses was higher than that of the silage without additive and the silage treated with FJLB. The rumen parameters: ruminal pH, ammonia-nitrogen ($NH_3$-N), volatile fatty acids (VFA), blood urea nitrogen (BUN) and bacterial populations were not significantly different among the treatments. In conclusion, these studies confirmed that applying molasses improved the fermentative quality, feed intake and digestibility of Napier grass.
Fermentation Quality;Lactic Acid Bacteria;Molasses;Cassava Meal
Supported by : National Research Council of Thailand (NRCT)
K-Means clustering and Lloyd's algorithm — 23 January 2015
The $k$-Means algorithm computes a Voronoi partition of the data set such that each landmark is given by the centroid of the corresponding cell. Let me quickly quote Wikipedia on the history of the algorithm before I explain what it is about: "The term "$k$-means" was first used by James MacQueen in 1967 (Macqueen, 1967), though the idea goes back to Hugo Steinhaus in 1957. The standard algorithm was first proposed by Stuart Lloyd in 1957 (Lloyd, 1982)." – www.wikipedia.org/k-means
The $k$-Means Problem
Let $X \subset \mathbb{R}^m$ be a finite collection of data points. Fix a positive integer $k$. Then our aim is to find a partition $\boldsymbol S = \{S_1, \ldots, S_k\}$ of $X$ into $k$ subsets that minimizes the following function \[J(\boldsymbol S) = \sum_{i=1}^k \sum_{x \in S_i} \| x - \mu(S_i) \|^2,\] where $\mu(S)$ denotes the mean of the points in $S$, i.e. \[\mu(S) = \frac{1}{|S|}\sum_{x \in S} x.\] We denote by $\mu_* \big( \boldsymbol S \big)$ the collection of means of the sets in $\boldsymbol S$. As a rule of thumb: in most of my posts, the $*$-functor applied to some construction (or function) $f$ can in functional-programming terms be translated to \(f_*(Z) := \verb+map+ \ f \ Z.\)
Voronoi cells
Let $(Y,d)$ be a metric space. Let $\Lambda \subset Y$ be a finite subset called the landmarks. Given a landmark $\lambda \in \Lambda$ we define its associated Voronoi cell $V_\lambda$ by \[V_\lambda := \{ y \in Y \ | \ d(y,\lambda) \leq d(y, \Lambda) \}.\] That means $V_\lambda$ consists of all points that are closer to $\lambda$ than to any of the other landmarks in $\Lambda$. Suppose we are given a subset $X \subset Y$; then we introduce the following shorthand notation for a relative version of a Voronoi cell \[V_{X, \lambda} := V_\lambda \cap X.\] When it is clear whether we are dealing with the relative or ordinary version we may omit the extra index. We write \(V_*(\Lambda)\) resp. \((V_X)_*(\Lambda)\) for the whole collection of Voronoi cells associated to the landmarks $\Lambda$, i.e. for the relative version we have \[(V_X)_*(\Lambda) := \{ V_{X,\lambda} \ | \ \lambda \in \Lambda \}.\]
Partitions and Voronoi cells
Suppose we have a discrete set $X$ embedded in $m$-dimensional Euclidean space $\mathbb{R}^m$ endowed with the Euclidean metric $d$. Suppose further we have chosen a family $\Lambda = \{\lambda_1,\ldots,\lambda_k\}$ of landmarks. We would like to produce a partition of $X$, i.e. a decomposition of $X$ into mutually disjoint sets, based on the Voronoi cells associated to $\Lambda$. However, we face an ambiguity for points $x \in X$ with \[d(x, \lambda_i) = d(x, \lambda_j), \text{ for some $i \neq j$}.\] We have to choose to which set we assign $x$ (and from which cell we remove $x$). For the remaining part of this post we will: Assign $x$ to the cell with the lower index, and remove it from the other. There is no particular reason to go with this rule other than it is the easiest I could come up with. After reassigning all problematic points we end up with an honest partition of $X$. We will continue to denote these sets by $V_\lambda$ resp. $V_{X,\lambda}$, and continue to refer to them as Voronoi cells.
Lloyd's algorithm
Computing the minimum of the function $J$ described above is usually too expensive. Instead one uses a heuristic algorithm to compute a local minimum.
The most common algorithm is Lloyd's algorithm, which I will sketch in the following: Suppose $X$ is a finite discrete set in $m$-dimensional Euclidean space $\mathbb{R}^m$ endowed with the Euclidean metric $d$. Suppose further we have fixed a positive integer $k$. Then choose an arbitrary partition $\boldsymbol S$, i.e. a decomposition of $X$ into a family of mutually disjoint sets $S_1,\ldots,S_k \subset X$. Then define a sequence $(C_n)_{n \in \mathbb{N}}$ of partitions as follows \[C_n := L^n(\boldsymbol S) \ \ \ \text{, where } L:= (V_X \circ \mu)_*.\] It is not hard to show that this sequence converges (see the section below). Hence one can define the result of Lloyd's algorithm applied to the initial partition $\boldsymbol S$ as follows \[\mathscr{L}_{V,\mu}\big(\boldsymbol S \big) := \lim_{n \to \infty} C_{n} .\]
Convergence of $C_n$
Observe that for any partition $\boldsymbol S$ of $X$ we have \[J(\boldsymbol S) \geq J \big( L(\boldsymbol S) \big).\] Applying $L$ to $\boldsymbol S$ results in two changes to the objective $J = \sum_i\sum_{x \in S_i} \| x - \mu_i \|^2$: First, replace $S_i$ by $V_{\mu_i}$, which makes $J$ smaller by definition of a Voronoi cell. Second, replace $\mu_i$ by $\mu(V_{\mu_i})$. Note that $\mu(V)$ minimizes $\sum_{x\in V} \| x - \mu \|^2$ and thus the second change makes the sum smaller as well. Furthermore, equality holds if and only if $\boldsymbol S = L(\boldsymbol S)$. Therefore $J(C_n)$ defines a non-increasing sequence in $\mathbb{R}$ which is bounded below by zero, hence it converges. Since the set of partitions of $X$ is finite, $J$ only takes a finite number of values. This implies that $J(C_n)$ takes a constant value for $n$ sufficiently large. By the observation above this implies that $C_n$ is constant for sufficiently large $n$ as well.
A solution to the $k$-Means problem will always be a partition of $X$ into $k$ subsets. A solution to the problem is not always suited to be interpreted as a partition into "clusters". Imagine a point cloud that is distributed according to a probability distribution centered along a straight line. Intuitively one would suggest a single "connected" cluster. However, $k$-means by definition would suggest otherwise. Without further analysis we couldn't tell the difference between that particular data set and another one scattered around $k$ different centers. So one should really see the solution for what it is: a partition, or "cover", of $X$. Luckily there are further directions to go from here and to build on top of $k$-Means. We briefly sketch one of those possible extensions in the section below.
Where to go from here - Witness complexes
Let $\boldsymbol S = \{ S_1,\ldots,S_k \} $ be a partition of $X$ obtained by Lloyd's algorithm, say. We would like to associate a simplicial complex to $\boldsymbol S$. In a previous post on the Mapper construction I explained how to construct the nerve of a covering of $X$. However, since $\boldsymbol S$ is a partition, i.e. the sets are mutually disjoint, this construction will only yield a trivial zero-dimensional complex. All we have to do is to slightly enlarge the sets in the partition $\boldsymbol S$. For $\varepsilon > 0$ we define \[\boldsymbol S_\varepsilon := \big\{ N_\varepsilon(S_1), \ldots, N_\varepsilon(S_k) \big\},\] where $N_\varepsilon(S)$ denotes the epsilon neighbourhood of $S$ in $X$, i.e. the set of points in $X$ whose distance to $S$ is at most $\varepsilon$. We can compute the nerve $\check{N}(\boldsymbol S_\varepsilon)$ of the enlarged cover $\boldsymbol S_\varepsilon$.
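For concreteness, here is a minimal R sketch of this construction (my own sketch, not code from the original post). It assumes the point set is given as an n x m matrix and the partition as integer cell labels in 1..k, and it only builds the 1-skeleton of the nerve.

```r
# Sketch of the enlarged cover S_eps and the 1-skeleton of its nerve.
# X: n x m matrix of points; cells: integer labels in 1..k from Lloyd's algorithm.
nerve_of_enlarged_cover <- function(X, cells, k, eps) {
  D <- as.matrix(dist(X))                       # pairwise distances within X
  # N_eps(S_i): all points of X within distance eps of the i-th cell
  cover <- lapply(1:k, function(i)
    which(apply(D[, cells == i, drop = FALSE], 1, min) <= eps))
  # an edge {i, j} of the nerve whenever N_eps(S_i) and N_eps(S_j) share a point
  pairs <- t(combn(k, 2))
  keep  <- apply(pairs, 1, function(p)
    length(intersect(cover[[p[1]]], cover[[p[2]]])) > 0)
  # higher simplices (triple-wise and higher intersections) are omitted here
  list(cover = cover, edges = pairs[keep, , drop = FALSE])
}
```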
For the "right" choice of $\varepsilon$ we are now able to distinguish the two data sets given in the previous section. A construction that is closely related (almost similar) to the above is the following. Let $\Lambda$ be a family of landmarks. The strong Witness complex $\mathcal{W}^s(X,\Lambda ,\varepsilon)$ (cf. (Carlsson, 2009)) is defined to be the complex whose vertex set is $\Lambda$, and where a collection $(\lambda_{1},\ldots, \lambda_{i})$ defines an $l$-simplex if and only if there is a witness $x \in X$ for $(\lambda_{1},\ldots, \lambda_{i})$, i.e. \[d(x,\lambda_{j}) \leq d(x,\Lambda) + \varepsilon, \text{ for $j=1,...,i$}.\] Macqueen, J. (1967). Some methods for classification and analysis of multivariate observations. In 5-Th Berkeley Symposium on Mathematical Statistics and Probability, 281–297. Lloyd, S. (1982). Least squares quantization in PCM. IEEE Transactions on Information Theory, 28(2), 129–137. https://doi.org/10.1109/TIT.1982.1056489 Carlsson, G. (2009). Topology and Data. Bull. Amer. Math. Soc., 46, 255–308. Copyright © Mirko Klukas 2022
Chapter 13 Modeling continuous relationships
Most people are familiar with the concept of correlation, and in this chapter we will provide a more formal understanding of this commonly used and misunderstood concept.
In 2017, the web site Fivethirtyeight.com published a story titled Higher Rates Of Hate Crimes Are Tied To Income Inequality which discussed the relationship between the prevalence of hate crimes and income inequality in the wake of the 2016 Presidential election. The story reported an analysis of hate crime data from the FBI and the Southern Poverty Law Center, on the basis of which they report: "we found that income inequality was the most significant determinant of population-adjusted hate crimes and hate incidents across the United States". The data for this analysis are available as part of the fivethirtyeight package for the R statistical software, which makes it easy for us to access them. The analysis reported in the story focused on the relationship between income inequality (defined by a quantity called the Gini index — see Appendix for more details) and the prevalence of hate crimes in each state.
Figure 13.1: Plot of rates of hate crimes vs. Gini index.
The relationship between income inequality and rates of hate crimes is shown in Figure 13.1. Looking at the data, it seems that there may be a positive relationship between the two variables. How can we quantify that relationship?
One way to quantify the relationship between two variables is the covariance. Remember that variance for a single variable is computed as the average squared difference between each data point and the mean: \[ s^2 = \frac{\sum_{i=1}^n (x_i - \bar{x})^2}{N - 1} \] This tells us how far each observation is from the mean, on average, in squared units. Covariance tells us whether there is a relation between the deviations of two different variables across observations. It is defined as: \[ covariance = \frac{\sum_{i=1}^n (x_i - \bar{x})(y_i - \bar{y})}{N - 1} \] This value will be far from zero when individual data points deviate by similar amounts from their respective means; if they are deviant in the same direction then the covariance is positive, whereas if they are deviant in opposite directions the covariance is negative. Let's look at a toy example first. The data are shown in Table 13.1, along with their individual deviations from the mean and their crossproducts.
Table 13.1: Data for toy example of covariance
x    y    y_dev   x_dev   crossproduct
3    5    -3.6    -4.6    16.56
5    4    -4.6    -2.6    11.96
8    7    -1.6     0.4    -0.64
10   10    1.4     2.4     3.36
12   17    8.4     4.4    36.96
The covariance is simply the mean of the crossproducts, which in this case is 17.05. We don't usually use the covariance to describe relationships between variables, because it varies with the overall level of variance in the data. Instead, we would usually use the correlation coefficient (often referred to as Pearson's correlation after the statistician Karl Pearson). The correlation is computed by scaling the covariance by the standard deviations of the two variables: \[ r = \frac{covariance}{s_xs_y} = \frac{\sum_{i=1}^n (x_i - \bar{x})(y_i - \bar{y})}{(N - 1)s_x s_y} \] In this case, the value is 0.89. The correlation coefficient is useful because it varies between -1 and 1 regardless of the nature of the data - in fact, we already discussed the correlation coefficient earlier in our discussion of effect sizes.
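Plugging the five data points from Table 13.1 into these formulas reproduces the values just reported (the standard deviations here are \(s_x \approx 3.65\) and \(s_y \approx 5.22\)):
\[ covariance = \frac{16.56 + 11.96 - 0.64 + 3.36 + 36.96}{5 - 1} = \frac{68.2}{4} = 17.05 \]
\[ r = \frac{17.05}{s_x s_y} = \frac{17.05}{3.65 \times 5.22} \approx 0.89 \]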
As we saw in that previous chapter, a correlation of 1 indicates a perfect linear relationship, a correlation of -1 indicates a perfect negative relationship, and a correlation of zero indicates no linear relationship. The correlation value of 0.42 between hate crimes and income inequality seems to indicate a reasonably strong relationship between the two, but we can also imagine that this could occur by chance even if there is no relationship. We can test the null hypothesis that the correlation is zero, using a simple equation that lets us convert a correlation value into a t statistic: \[ \textit{t}_r = \frac{r\sqrt{N-2}}{\sqrt{1-r^2}} \] Under the null hypothesis \(H_0:r=0\), this statistic is distributed as a t distribution with \(N - 2\) degrees of freedom. We can compute this using our statistical software:
## Pearson's product-moment correlation
## data: hateCrimes$avg_hatecrimes_per_100k_fbi and hateCrimes$gini_index
## t = 3, df = 48, p-value = 0.002
## alternative hypothesis: true correlation is not equal to 0
## 95 percent confidence interval:
## 0.16 0.63
## sample estimates:
## cor
## 0.42
This test shows that the likelihood of an r value this extreme or more is quite low under the null hypothesis, so we would reject the null hypothesis of \(r=0\). Note that this test assumes that both variables are normally distributed. We could also test this by randomization, in which we repeatedly shuffle the values of one of the variables and compute the correlation, and then compare our observed correlation value to this null distribution to determine how likely our observed value would be under the null hypothesis. The results are shown in Figure 13.2. The p-value computed using randomization is reasonably similar to the answer given by the t-test.
Figure 13.2: Histogram of correlation values under the null hypothesis, obtained by shuffling values. Observed value is denoted by the blue line.
We could also use Bayesian inference to estimate the correlation; see the Appendix for more on this. You may have noticed something a bit odd in Figure 13.1 – one of the datapoints (the one for the District of Columbia) seemed to be quite separate from the others. We refer to this as an outlier, and the standard correlation coefficient is very sensitive to outliers. For example, in Figure 13.3 we can see how a single outlying data point can cause a very high positive correlation value, even when the actual relationship between the other data points is perfectly negative.
Figure 13.3: A simulated example of the effects of outliers on correlation. Without the outlier the remainder of the datapoints have a perfect negative correlation, but the single outlier changes the correlation value to highly positive.
One way to address outliers is to compute the correlation on the ranks of the data after ordering them, rather than on the data themselves; this is known as the Spearman correlation. Whereas the Pearson correlation for the example in Figure 13.3 was 0.83, the Spearman correlation is -0.45, showing that the rank correlation reduces the effect of the outlier and reflects the negative relationship between the majority of the data points.
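A small simulation along the same lines makes this contrast easy to reproduce (the values here are made up for illustration and are not the exact data behind Figure 13.3):

```r
# A near-perfect negative relationship plus one extreme outlier
set.seed(1)
x <- 1:10
y <- -x + rnorm(10, sd = 0.1)
x <- c(x, 30); y <- c(y, 30)      # add a single outlying point

cor(x, y, method = "pearson")     # strongly positive, driven by the outlier
cor(x, y, method = "spearman")    # rank correlation still reflects the negative trend
```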
We can compute the rank correlation on the hate crime data as well:
## Spearman's rank correlation rho
## S = 20146, p-value = 0.8
## alternative hypothesis: true rho is not equal to 0
## rho
## 0.033
Now we see that the correlation is no longer significant (and in fact is very near zero), suggesting that the claims of the FiveThirtyEight blog post may have been incorrect due to the effect of the outlier.
When we say that one thing causes another, what do we mean? There is a long history in philosophy of discussion about the meaning of causality, but in statistics one way that we commonly think of causation is in terms of experimental control. That is, if we think that factor X causes factor Y, then manipulating the value of X should also change the value of Y.
In medicine, there is a set of ideas known as Koch's postulates which have historically been used to determine whether a particular organism causes a disease. The basic idea is that the organism should be present in people with the disease, and not present in those without it – thus, a treatment that eliminates the organism should also eliminate the disease. Further, infecting someone with the organism should cause them to contract the disease. An example of this was seen in the work of Dr. Barry Marshall, who had a hypothesis that stomach ulcers were caused by a bacterium (Helicobacter pylori). To demonstrate this, he infected himself with the bacterium, and soon thereafter developed severe inflammation in his stomach. He then treated himself with an antibiotic, and his stomach soon recovered. He later won the Nobel Prize in Medicine for this work.
Often we would like to test causal hypotheses but we can't actually do an experiment, either because it's impossible ("What is the relationship between human carbon emissions and the earth's climate?") or unethical ("What are the effects of severe abuse on child brain development?"). However, we can still collect data that might be relevant to those questions. For example, we can potentially collect data from children who have been abused as well as those who have not, and we can then ask whether their brain development differs.
Let's say that we did such an analysis, and we found that abused children had poorer brain development than non-abused children. Would this demonstrate that abuse causes poorer brain development? No. Whenever we observe a statistical association between two variables, it is certainly possible that one of those two variables causes the other. However, it is also possible that both of the variables are being influenced by a third variable; in this example, it could be that child abuse is associated with family stress, which could also cause poorer brain development through less intellectual engagement, food stress, or many other possible avenues. The point is that a correlation between two variables generally tells us that something is probably causing something else, but it doesn't tell us what is causing what.
We would usually say that knowledge is a latent variable – that is, we can't measure it directly but we can see it reflected in variables that we can measure (like grades and finishing times). Figure 13.5 shows this. Figure 13.4: A graph showing causal relationships between three variables: study time, exam grades, and exam finishing time. A green arrow represents a positive relationship (i.e. more study time causes exam grades to increase), and a red arrow represents a negative relationship (i.e. more study time causes faster completion of the exam). Figure 13.5: A graph showing the same causal relationships as above, but now also showing the latent variable (knowledge) using a square box. Here we would say that knowledge mediates the relationship between study time and grades/finishing times. That means that if we were able to hold knowledge constant (for example, by administering a drug that causes immediate forgetting), then the amount of study time should no longer have an effect on grades and finishing times. Note that if we simply measured exam grades and finishing times we would generally see negative relationship between them, because people who finish exams the fastest in general get the highest grades. However, if we were to interpret this correlation as a causal relation, this would tell us that in order to get better grades, we should actually finish the exam more quickly! This example shows how tricky the inference of causality from non-experimental data can be. Within statistics and machine learning, there is a very active research community that is currently studying the question of when and how we can infer causal relationships from non-experimental data. However, these methods often require strong assumptions, and must generally be used with great caution. After reading this chapter, you should be able to: Describe the concept of the correlation coefficient and its interpretation Compute the correlation between two continuous variables Describe the effect of outlier data points and how to address them. Describe the potential causal influences that can give rise to an observed correlation. The Book of Why by Judea Pearl - an excellent introduction to the ideas behind causal inference. Before we look at the analysis reported in the story, it's first useful to understand how the Gini index is used to quantify inequality. The Gini index is usually defined in terms of a curve that describes the relation between income and the proportion of the population that has income at or less than that level, known as a Lorenz curve. However, another way to think of it is more intuitive: It is the relative mean absolute difference between incomes, divided by two (from https://en.wikipedia.org/wiki/Gini_coefficient): \[ G = \frac{\displaystyle{\sum_{i=1}^n \sum_{j=1}^n \left| x_i - x_j \right|}}{\displaystyle{2n\sum_{i=1}^n x_i}} \] Figure 13.6: Lorenz curves for A) perfect equality, B) normally distributed income, and C) high inequality (equal income except for one very wealthy individual). Figure 13.6 shows the Lorenz curves for several different income distributions. The top left panel (A) shows an example with 10 people where everyone has exactly the same income. The length of the intervals between points are equal, indicating each person earns an identical share of the total income in the population. The top right panel (B) shows an example where income is normally distributed. 
The bottom left panel shows an example with high inequality; everyone has equal income ($40,000) except for one person, who has an income of $40,000,000. According to the US Census, the United States had a Gini index of 0.469 in 2010, falling roughly halfway between our normally distributed and maximally unequal examples.
We can also analyze the FiveThirtyEight data using Bayesian analysis, which has two advantages. First, it provides us with a posterior probability – in this case, the probability that the correlation value exceeds zero. Second, the Bayesian estimate combines the observed evidence with a prior, which has the effect of regularizing the correlation estimate, effectively pulling it towards zero. Here we can compute it using the BayesFactor package in R.
## Bayes factor analysis
## --------------
## [1] Alt., r=0.333 : 21 ±0%
## Against denominator:
## Null, rho = 0
## ---
## Bayes factor type: BFcorrelation, Jeffreys-beta*
## Summary of Posterior Distribution
## Parameter | Median | 95% CI | pd | ROPE | % in ROPE | BF | Prior
## ----------------------------------------------------------------------------------------------
## rho | 0.38 | [0.13, 0.58] | 99.88% | [-0.05, 0.05] | 0% | 20.85 | Beta (3 +- 3)
Notice that the correlation estimated using the Bayesian method (0.38) is slightly smaller than the one estimated using the standard correlation coefficient (0.42), which is due to the fact that the estimate is based on a combination of the evidence and the prior, which effectively shrinks the estimate toward zero. However, notice that the Bayesian analysis is not robust to the outlier, and it still says that there is fairly strong evidence that the correlation is greater than zero (with a Bayes factor of more than 20).
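For reference, output of this kind can be produced with a call along the following lines. This is only a sketch: it assumes the hateCrimes data frame used earlier in the chapter and the correlationBF function from the BayesFactor package, and the exact options behind the posterior summary table shown above may differ.

```r
# Sketch of the Bayesian correlation analysis (assumes the hateCrimes data frame
# from the fivethirtyeight package, as used earlier in the chapter)
library(BayesFactor)
bf <- correlationBF(y = hateCrimes$avg_hatecrimes_per_100k_fbi,
                    x = hateCrimes$gini_index)
bf                                      # Bayes factor for rho != 0 versus rho = 0
samples <- posterior(bf, iterations = 10000)
summary(samples)                        # posterior summary, including rho
```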
Local differential privacy for human-centered computing
Xianjin Fang1, Qingkui Zeng1 & Gaoming Yang ORCID: orcid.org/0000-0002-7666-10381
Human-centered computing in cloud, edge, and fog is one of the most pressing issues. Edge and fog nodes generate huge amounts of data continuously, and the analysis of these data provides valuable information, but it also increases privacy risks. Personal sensitive data may be disclosed by untrusted third-party service providers, and current solutions for privacy protection are inefficient and costly, making it difficult to obtain usable statistics. To solve these problems, we propose a local differential privacy protocol for sensitive data collection in human-centered computing. Firstly, to maintain high data utility, the optimal number of hash functions and the mapping length are selected based on the size of the collected data. Secondly, we hash the sensitive data, add appropriate Laplace noise on the client side, and send the reports to the server side. Thirdly, we construct the count sketch matrix to obtain privacy-preserving statistics on the server side. Finally, the utility of the proposed protocol is verified on synthetic datasets and a real dataset. The experimental results demonstrate that the protocol can achieve a balance between data utility and privacy protection.
With the development of Internet of Things technology [1], edge and fog nodes [2], mobile phones, smart cars, wearable devices, and sensor networks have increasingly become the sources of big data [3]. Human-centered computing in cloud [4], edge, and fog has become a necessary task for enterprises and governments [5]. On the one hand, big data collection and analysis can be used to train machine learning models and to understand user group characteristics to improve user experience [6]; on the other hand, deriving sensitive data, such as user preferences, lifestyle habits, and location information [7], can result in privacy leaks. Researchers have conducted many studies on how to prevent the disclosure of personal sensitive information [8] and have proposed many privacy protocols [9]. Differential privacy (DP) [10] is a widely studied privacy-preserving model; it requires that the addition or deletion of any one record has only a limited effect on the query results. The traditional differential privacy model is deployed on the central server, and data collected from different sources are transformed into aggregate responses to queries for privacy protection, i.e., the central server publishes query information that satisfies differential privacy. Therefore, differential privacy is widely applied in all aspects of big data collection. For example, the US Census Bureau uses differential privacy for demographics [11]. However, in the data collection phase, there is little oversight over third-party service providers; consequently, privacy leaks frequently occur, as has happened at Facebook [12] and Snapchat [13]. Such frequent privacy disclosures have attracted the public's attention, but in practice, it is very difficult to find a trusted third-party aggregator. This difficulty limits the application of traditional differential privacy to a certain extent. Therefore, it is necessary to consider how to ensure that private information is not disclosed when there is no trusted third-party service provider. As a result of extensive research on differential privacy, local differential privacy (LDP) has been developed on the basis of traditional differential privacy protection [14].
LDP can obtain valuable information by aggregating clients' perturbed reports without obtaining real released data information, and it can prevent untrusted third parties from revealing privacy. LDP can be applied to various data collection scenarios, such as frequency estimation, heavy hitters identification, and frequent itemset mining. Companies in different fields, such as Google [15] and Apple [16], have used LDP protocols to collect users' default browser homepages and search engine settings, which can identify harmful or malicious hijacking user settings [17] and find the most frequently used emojis or words. However, the LDP model still has shortcomings with respect to big data collection, such as low accuracy, high space-time complexities, and statistical errors. Since different tasks require adopting different LDP protocols in actual applications, to determining the appropriate parameters for each protocol is difficult, which undoubtedly increases the cost of using LDP to protect sensitive data. To solve these problems, we propose using the count sketch [18] and Laplace mechanism [10] to reduce space-time complexity and computational overhead and to obtain high data utility under different distributions. The main contributions of this paper are as follows: (i) We design the LDP protocol to provide a controllable privacy guarantee level on the client side that does not require trusted third-party servers; (ii) The proposed protocol solves the problems of large space-time overhead and low data utility and can be applied to different data distributions; (iii) Experiments show that the proposed protocol can provide available statistical information while protecting user data privacy. This paper is organized as follows. First, we describe related works in Section 2 and the background knowledge for this paper in Section 3. Next, we introduce the current protocols for big data collection in Section 4 and propose our method in Section 6. Then, we evaluate our method in Section 6 and analyze the results in Section 7. At last, we make a summary of our work in Section 8. Many scholars and enterprises have studied how to apply LDP in cloud, edge, and fog scenarios and how to improve the performance of LDP protocols. For example, Erlingsson et al. [15] propose the RAPPOR protocol, which uses a Bloom filter and random response to implement an LDP frequency estimated in the Chrome browser. In reference [16], Apple's Differential Privacy team proposes using one-hot encoding technology to encode sensitive data and deploy a CMS algorithm for analyzing the most popular emojis and media playback preferences in Safari. Fanti et al. [19] propose the unknown-RAPPOR protocol to estimate frequency without a data dictionary. Ding et al. [20] propose an algorithm to solve the problem of privacy disclosure when repeatedly collecting telemetry data and apply the algorithm to Microsoft-related products. Wang et al. [14] propose the Harmony protocol to achieve LDP protection. These authors compute the numerical attributes means, and the protocol is deployed in the Samsung's system software. Wang et al. [21] use the LDP protocol to answer private multidimensional queries on Alibaba's e-commerce transaction records. LDP has been deployed in the industry, and it has created practical benefits. One of LDP's important applications is frequency estimation. Wang et al. 
[22] propose an OLH algorithm to obtain lower estimation error and to reduce the communication overhead in a larger domain; the algorithm can also be used in heavy hitters identification. The Hadamard response [23] uses the Hadamard transform instead of a hash function; because individual Hadamard entries are easier to compute, the server side can aggregate reports faster. Joseph et al. [24] propose a technique that recomputes a statistic only when it has changed significantly, rather than repeatedly recomputing its current value, so that error accumulates only when the statistic actually changes. Wang et al. [25] introduce a method that adds postprocessing steps to frequency estimations to make them consistent while achieving high accuracy for a wide range of tasks. Most of the LDP protocols for frequency estimation are implemented by random responses, thereby resulting in low accuracy of the estimation results. Thus, the goal of this paper is to investigate mechanisms that can achieve LDP with high data utility. Recent LDP studies have also focused on other applications [26, 27]. Bassily et al. [28] propose an S-HIST algorithm for histogram release and utilize random projection technology to further reduce the communication cost. Wang et al. [22] propose identifying heavy hitters in datasets under LDP protection. Ren et al. [29] apply the LDP model to solve the privacy problem in the case of collecting high-dimensional crowdsourced data. Wang et al. [30] propose privacy amplification by multiparty differential privacy, which introduces an auxiliary server between the client side and the server side. Ye et al. [31] propose PrivKV, which can estimate the mean and frequency of key-value data, and PrivKVM, which can improve estimation accuracy through multiple iterations. Therefore, many LDP protocols have been proposed to solve privacy issues in cloud, edge, and fog computing scenarios. Differential privacy requires that any single tuple in the dataset has only a limited impact on the output. For example, for two datasets D and D′ that differ by only one tuple, the attacker cannot infer the sensitive information of a specific data tuple from the query results, so it is impossible to know whether the data of a certain user exist in the dataset. The definition of differential privacy is as follows: Definition 1 ε-differential privacy [10]. Where ε > 0, a randomized mechanism M satisfies ε-differential privacy if for all datasets D and D′ that differ in at most one tuple, and all S⊆Range(M), we have $$ \Pr \left[M(D)\in S\right]\le {e}^{\varepsilon}\cdot \Pr \left[M\left(D^{\prime}\right)\in S\right] $$ Local differential privacy Local differential privacy requires any two tuples to be indistinguishable. For example, for any two tuples x and x′, the attacker cannot infer the sensitive information of a specific data tuple from the query results, so it is impossible to know the specific tuple. ε-local differential privacy is defined as follows: Definition 2 ε-local differential privacy [14].
Where ε > 0, a randomized mechanism M satisfies ε-local differential privacy if and only if for any two input tuples x and x′ in the domain of M, and for any possible output x* of M, we have $$ \Pr \left[M(x)={x}^{\ast}\right]\le {e}^{\varepsilon}\cdot \Pr \left[M\left(x\hbox{'}\right)={x}^{\ast}\right] $$ It can be concluded from the definition of ε-local differential privacy that the output of a randomized mechanism of any pair of input tuples is similar, and therefore cannot be inferred by the specific input tuple. A smaller privacy budget ε ensures a higher privacy level, but repeating the queries for the same tuple will consume ε, thereby decreasing the level of privacy. Therefore, the choice of ε needs to be determined according to the specific scenario. Sensitivity and Laplace mechanism Differential privacy implements privacy protection by adding noise to the query results, and the amount of noise added should not only protect user privacy but also maintain data utility. Therefore, sensitivity becomes a key parameter of noise control. In local differential privacy, sensitivity is based on a query function of any two tuples; the following definitions are given. Definition 3 Local sensitivity [32]. For f: Dn → Rd and x∈Dn, the local sensitivity of f at x (with respect to the l1 metric) is: $$ L{S}_f(x)=\underset{y:d\left(x,y\right)=1}{\max }{\left\Vert f(x)-f(y)\right\Vert}_1 $$ The notion of local sensitivity is a discrete analog of the Laplacian (or maximum magnitude of the partial derivative in different directions). The Laplace mechanism of differential privacy [10] adds noise that satisfies the Laplace distribution to the original dataset to implement differential privacy protection; therefore, we have the following definition. Definition 4 Laplace mechanism [33]. An algorithm A takes as input a dataset D, and some ε > 0, a query Q with computing function f: Dn → Rd, and outputs $$ A(D)=f(D)+\left({Y}_1,...,{Y}_d\right) $$ where the Yi is drawn i.i.d from Lap(LSf(x)/ε), thus obeying the Laplace distribution with the scale parameter (LSf(x)/ε). For ease of expression, we denote Δs as the local sensitivity and Lap(λ) as a random variable that obeys the Laplace distribution of scale λ. The corresponding probability density function is \( \mathrm{pdf}(y)=1/2{\uplambda \mathrm{e}}^{\left(-\frac{\mid y\mid }{\lambda}\right)} \). System structure Local differential privacy can be seen as a special case of differential privacy [34]. Compare with the perturbation process in DP, the perturbation process in LDP shifts from the server side to the client side. The privacy leakage threat from untrusted third-party servers is eliminated because trusted third-party servers are not required. This collection consists of the following main parts: The encoding is performed by the client side; each tuple should be encoded into a proper vector to ensure perturbation; The perturbation is performed by the client side, and each piece of encoded data generates a perturbed report by the random function, thereby satisfying the definition of ε-local differential privacy. Then, the client side sends these perturbed reports to the server; The aggregation process is performed by the server side, which aggregates reports from the client side and generates available statistics, as shown in Fig. 1. Local differential privacy system structure Problem setting To use the LDP model to analyze and protect the collected data, some scholars have proposed many privacy protection schemes when estimating frequency. 
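Before discussing the shortcomings of these existing schemes, the Laplace mechanism of Definition 4 can be made concrete with a short sketch. This is only an illustration, not code from the paper; the counting query, its l1-sensitivity of 1, and the privacy budget are assumed values.

```python
import numpy as np

rng = np.random.default_rng()

def laplace_mechanism(query_value, sensitivity, epsilon):
    # Release query_value + Lap(sensitivity / epsilon) noise, as in Definition 4
    return query_value + rng.laplace(loc=0.0, scale=sensitivity / epsilon)

# Assumed example: a counting query with l1-sensitivity 1 and privacy budget epsilon = 1.0
print(laplace_mechanism(query_value=1234, sensitivity=1.0, epsilon=1.0))
```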
However, these solutions still have deficiencies, such as high computational overhead and low data utility. Therefore, we propose a modified solution to further improve data utility and algorithm accuracy based on solving existing deficiencies. Analyzing current methods Generalized random response (GRR) This random response technique was proposed by Warner et al. [35]. For each piece of collected private data v∈D, the user sends the true value of v with probability p and sends randomly selected value v′ from D\{v} with a probability 1 − p. Assuming that domain D contains d = |D| values, the perturbation function is as follows: $$ \Pr \left[\mathrm{GRR}(D)=y\right]=\Big\{{\displaystyle \begin{array}{l}p=\frac{e^{\varepsilon }}{e^{\varepsilon }+d-1},\kern0.5em \mathrm{if}\ y=v\\ {}q=\frac{1}{e^{\varepsilon }+d-1},\kern0.5em \mathrm{if}\ y\ne v\kern0.5em \end{array}} $$ Since p/q = eε, the ε-differential privacy definition is satisfied. Optimal local hash (OLH) The OLH protocol was proposed in [22] to address the problem of large category attributes. First, the client-side algorithm maps the user's true value v to a smaller hash value domain g by using a hash function. Then, the algorithm performs a random response to the hash value of this smaller domain. The parameter g is a trade-off for the loss of information between the hashing and randomization step; when g = eε + 1, the trade-off is optimal. The time complexity of the algorithm is O(logn), and the space complexity is O(nlog|D|). Randomized aggregatable privacy-preserving ordinal response (RAPPOR) The RAPPOR protocol [15] is deployed in Google's Chrome browser. In the RAPPOR protocol, the user's real value v is encoded into the bit vector B. When there are numerous category attributes, the protocol causes problems, such as a high communication cost and low accuracy. Therefore, RAPPOR uses the Bloom filter for encoding. The value v is mapped to a different position in the bit vector B using k hash functions, i.e., the corresponding position is set to 1, and the remaining positions are set to 0. After encoding, RAPPOR utilizes a perturbation function to obtain the perturbed bit vector B′. Hadamard Count Mean Sketch (HCMS) The Hadamard Count Mean Sketch protocol was proposed by Apple's Differential Privacy Team [16] in 2016 to complete large-scale data collection with LDP and to obtain accurate counts. By utilizing the Hadamard transform, the sparse vector is transformed to send a single privacy bit, so each user just sends one private bit. A certain tuple x sent by a given user belongs to a set of values D; j is a randomly selected index from k hash functions, and l is a randomly selected index from the m bits of the hash map domain. Algorithm 1 shows the client's perturbation process. First, each user initializes the vector v and sets the mapping value of the attribute value d in v at the j-th hash function to 1, and the vector v forms a one-hot vector. Second, the algorithm randomly flips the l-th bit of the vector, denoted as wl∈{− 1,1}, with a probability of (1/eε + 1). Finally, the client side sends the report s{wl, j, l} to the server. The time complexity of the algorithm is O(n+kmlog(m)+|D|k), and the space complexity is O(log(k) + log(m) + 1). Algorithm 2 shows the server aggregation process. First, it takes each report w(i) and transforms it to x(i). Then, the server constructs the sketch matrix MH and add x(i) to row j(i), column l(i) of MH. Next, it uses the transpose Hadamard matrix to transform the rows of sketch back. 
At last, the server estimates the count of entry d∈|D| by debiasing the count and averaging over the corresponding hash entries in MH. Deficiencies of current protocols Many current protocols have been proposed to protect privacy, but they still have deficiencies. First, the LDP protocol has very strict requirements for selecting parameters and concerning the size of the data. For example, the choice of parameters k and m in the HCMS algorithm greatly influences data variance and utility, and different tasks need to identify different suitable parameters. Second, the RAPPOR and CMS algorithms have large space-time complexity and a high communication cost. For data collection in cloud, edge, and fog scenarios, this problem will make computation highly inefficient. Third, due to the use of random response techniques, a data value with low frequency can even be estimated as negative. Finally, when privacy-preserving data are from different distributions, data utility varies greatly, and it is difficult to fit these data to different tasks. Design method to address deficiencies Because of the shortcomings of current LDP protocols, we use the Laplace mechanism to solve the problem that random response technology requires strict data size and construct the count sketch matrix for aggregation to reduce space-time complexity. The protocol consists of the local perturbator and aggregator. The local perturbator is designed on the client side to perturb the raw data. When a user generates data, the local perturbator selects a random hash function to encode the data as a one-hot encoding and adds the Laplace noises in the mapping location. Then, the local perturbator sends the report containing the selected hash function index and the noised mapping location to the central server. Since the client-side algorithm satisfies the LDP definition, even if the adversary has the relevant background knowledge and acquires another user's data, the adversary cannot infer which data are the user's data. The process is shown in Fig. 2 a. The algorithm system structure. a Perturbation. b Aggregation The aggregator is designed on the server side to aggregate the reports. When the central server receives all the perturbed reports from the client side, the server will aggregate them through an aggregator. The aggregator structures the count sketch matrix and cumulates the number of mapping positions for each attribute value under different hash functions. The server side obtains each data value frequency estimation by matrix count. Estimating the data entry d as an example, the aggregator counts the number of xj's frequency of the corresponding mapping position under different hash functions and sums up the numbers, as shown in Fig. 2 b. For human-centered computing in cloud, edge, and fog scenarios, there are generally many users and one data service provider in the LDP model. Therefore, the proposed protocol designs the client and server algorithm for the users and service providers, respectively. The client-side algorithm perturbs the user's raw data and sends an incomplete report; each user sends the perturbed report to the unique service provider. When the service provider receives the user's perturbed reports, the server-side algorithm aggregates the reports to obtain the available statistics. Client-side algorithm design The client-side algorithm is designed to prevent user data leakage by ensuring that the perturbed data obey local differential privacy. 
The raw data are first encoded by a randomly selected hash function, and then the Laplace mechanism is applied to implement the perturb operation in the hash map location. The parameters passed by the server are used before the algorithm is deployed; these parameters include the privacy budget ε, the number of hash functions k, and the length of hash mapping bit m. According to equation (3), any two different pieces of data have at the most two differences in the one-hot encoding vector v, so the local sensitivity of adding Laplace noise is 2. The report sent by the algorithm includes two parts: a randomly selected hash function index j and a hashed map position with noise l′. For convenience, this algorithm is named the Laplace Count Sketch (LCS) client-side algorithm. The specific steps are shown in Algorithm 3. Line 1 of Algorithm 2 initializes an all-zero vector of length m, and in lines 2–4, the algorithm randomly selects the hash function and hash mapping on vector v, where hj(d) denotes choosing the function j to hash data d. In addition, the mapping position is added with the Laplace noise in lines 5–7. Since the mapping position is an integer, the noise value should be rounded. As is already known, the length of the mapping vector is m. For the added noise mapping position l′, l′ will equal l′ minus m if the value of l′ is greater than or equal to m; else, l′ equals l′ plus m if the value of l′ is less than 0. The algorithm sends the perturbed report si in line 8. Each user sends the perturbed report with O(1) time complexity and O(k+m) space complexity; therefore, the time complexity of the client-side algorithm is O(n), and the space complexity of the client-side algorithm is O(n(k+m)). Server-side algorithm design The server-side constructs the count sketch matrix using the same parameters as those used by the client side after collecting perturbed reports from different users. First, the server-side algorithm constructs an all-zero matrix of size k*m. Second, the algorithm cumulates the index positions for each report position. Third, after constructing the completed count sketch matrix, the algorithm searches the position of each data value corresponding to the row in the matrix in different hash functions and adjusts the sketch counts according to Laplace distribution. Finally, the algorithm computes the counts at these positions to obtain the frequency statistics of each attribute value. The specific steps are shown in Algorithm 4. Line 1 of Algorithm 3 initializes an all-zero matrix of size k*m. In line 2, the algorithm deals with the collected n reports and adds 1 to the index position of the corresponding row and column in the matrix. Then, in line 3, the algorithm uses the count sketch to record the matrix value of each data at the corresponding position of each hash function and estimates the frequency of each attribute value. The time and space complexities are O(n+|D|*k) and O(k*m), respectively, in the server-side algorithm. Privacy and utility analysis This section discusses the privacy and utility of the proposed protocol. We first prove that the LCS protocol satisfies the local differential privacy definition and then theoretical analysis of the algorithm's variance, and a smaller variance ensures the higher data utility. Theorem 1 The LCS protocol satisfies the definition of ε- local differential privacy. Proof. 
Given any pair of input tuples x and x′ and any possible output x*, px is labeled as the probability density function of A(x), and px′ is labeled as the probability density function of A(x′); compare the probability of these two. $$ {\displaystyle \begin{array}{c}\frac{\Pr \left[A(x)={x}^{\ast}\right]}{\Pr \left[A\left(x^{\prime}\right)={x}^{\ast}\right]}=\frac{\Pr \left[x+y\right]}{\Pr \left[x^{\prime }+y\right]}\\ {}=\frac{p_x(y)}{p_y(y)}\\ {}={e}^{\varepsilon \left(\frac{\mid f(x)-f\left(x\prime \right)\mid }{\varDelta s}\right)}\\ {}\le {e}^{\varepsilon}\end{array}} $$ Since the sensitivity Δs = 2, the maximum difference between the values of the functions f(x) and f(x′) is 2; that is, |f(x)−f(x′)| has a value range of [0, 2], so (|f(x) − f(x′)|/Δs) ≤ 1. Therefore, the definition of local differential privacy is satisfied. We infer the variance of the LCS algorithm and denote the estimated frequency f′ (d), the real frequency f(d), and \( \mathbbm{E}\left(f^{\prime }(d)\right)=f(d) \). $$ {\displaystyle \begin{array}{c} Var\left(f^{\prime }(d)\right)=\mathbbm{E}\left(f^{\prime }{(d)}^2\right)-\mathbbm{E}{\left(f\prime (d)\right)}^2\\ {}=\frac{\sum \limits_d^Df{(d)}^2}{k^2{m}^2n}\end{array}} $$ The larger the parameters k and m are, the smaller the variance and the higher the utility are. However, space-time complexity increases when k and m are very large. Experimental section Experimental datasets The experiments use three datasets: two synthetic datasets and one real dataset; all datasets are a one-dimensional classification attribute. The real dataset uses the 2017 Integrated Public Use Microdata Series [36] (US) and selects the education level EDU attribute, which has 25 data categories; we extract 1% from the dataset and take the first million pieces of data as the experimental dataset. The synthetic dataset satisfies the uniform distribution and the Zipf distribution. The parameter a of the Zipf distribution is set to 1.2, and each synthetic dataset contains 100,000 pieces of data. Experimental competitors OLH is a better choice for our experiment as the comparison protocol because it gives near-optimal utility when the communication bandwidth is reasonable. In addition, we choose HCMS as another comparison protocol, which reduces the communication overhead by sending a single private bit at a time. Experimental implementation These protocols were implemented in Python 3.7 with NumPy and xxhash libraries and were performed on a PC with Intel Core i7-7700hq CPU and 16 GB RAM. Each experiment was repeated ten times to reduce the influence of contingency on the experimental results. Experimental metrics To analyze the utility of our protocol for different parameters and scenarios, we compare the error between the true distribution and the estimated distribution for frequency using the mean absolute percentage error (MAPE). For each data value, we calculate the absolute value between the estimated and true frequency, divide the absolute value by the true frequency, then cumulate these values and divide by the size of the data value domain. The definition of MAPE is as follows: $$ \mathrm{MAPE}=\frac{\sum \limits_{i=1}^{\mid D\mid}\mid \frac{y_i-{x}_i}{y_i}\mid }{\mid D\mid}\times 100\% $$ where |D| is the category attribute domain size, yi is the real frequency of the i-th attribute value, and xi is the estimated frequency of the i-th attribute value. The smaller the MAPE value is, the closer the estimated distribution is to the real distribution, and the better the data utility is. 
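As a quick illustration of the MAPE metric just defined, a minimal sketch follows; the true and estimated frequency vectors are hypothetical and serve only to show how the formula is applied.

```python
import numpy as np

def mape(true_freq, est_freq):
    # Mean absolute percentage error over the attribute domain, as in the equation above
    y = np.asarray(true_freq, dtype=float)
    x = np.asarray(est_freq, dtype=float)
    return np.mean(np.abs((y - x) / y)) * 100.0

# Hypothetical true and estimated frequencies over a 4-value domain
print(mape([0.40, 0.30, 0.20, 0.10], [0.38, 0.31, 0.22, 0.09]))
```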
Effects of privacy budgets We validate the effects of privacy budgets parameter ε on the data utility of the LCS protocol through experiments, select the HCMS and OLH protocols as the control group and select the uniform and Zipf synthetic datasets, in which the classification attribute domain value is 50. For the uniform dataset, we select the number of hash functions k = 128 and the size of the hash map length m = 128 and verify the MAPE values of the three protocols with the variation of the privacy budget ε. As shown in Fig. 3 a, as the privacy budget ε increases, the MAPE value of these three protocols decreases; that is, the data utility increases when the privacy budget increases. The data utility of the LCS protocol is significantly better than that of HCMS and slightly lower than that of OLH when ε < 2.5 and higher than the other protocols when ε > 2.5. The effects of privacy budgets on MAPE. a The effects of privacy budgets in uniform datasets. b The effects of privacy budgets in Zipf datasets. c The effects of privacy budgets in the real dataset Then, we adjust the parameters of the upper group experiments and verify the effects of the privacy budget ε on the synthetic dataset satisfying the Zipf distribution; we select k = 256 and m = 512, as shown in Fig. 3 b. The data utility of LCS is superior to that of the HCMS protocol throughout the experiments, and the data utility of LCS is marginally lower than that of the OLH protocol when ε > 3. Next, we validate the effects of privacy budget ε on utility in a real dataset and perform experiments on the IPUMS dataset; we choose parameter k = 256 and m = 128. In Fig. 3 c, the experimental result shows that the utility of the LCS protocol is higher than that of HCMS and lower than that of OLH. It is verified that the protocol is also feasible in practical applications, and the utility is better than that of HCMS. Effects of data sizes The current LDP protocols exhibit dramatic changes in data utility when collecting data of different sizes. To verify the utility of the LCS protocol at different sizes of data, we compare the LCS protocol with the HCMS and OLH protocols with uniformly distributed synthetic data while varying data sizes. The parameters are set to m = 128, k = 1024, and ε = 2. As shown in Fig. 4 a, when the data size n is small, the OLH and HCMS protocols have large errors, and the utility of these protocols is very low, such as when n = 1000; thus, the HCMS and OLH protocols are not usable. However, LCS still maintains better data utility at different data sizes. Next, we verify the utility variation under different data sizes in the synthetic datasets satisfying the Zipf distribution, adjust the parameter settings to k = 256, m = 512, and ε = 2 and select the HCMS and OLH protocols for comparison. In Fig. 4 b, HCMS and OLH are not available when the data size is small, and the utility of LCS is better at varying experimental data sizes. The effects of different data sizes. a The effects of data sizes in the uniform datasets. b The effects of data sizes in the Zipf datasets Frequency estimation When the privacy budget parameter ε is small, too much noise is added during the perturbation process, thereby resulting in the estimated frequency being much lower than the original frequency. We set up experiments to observe the frequency estimation under different datasets. 
To clearly show the frequency distribution trend and facilitate observation, we calculate the estimated frequency of each attribute value and multiply the original data amount to obtain a more accurate frequency estimate. The parameters of LCS and HCMS are both set to k = 128, m = 1024, and ε = 2; in addition, we choose the Zipf synthetic dataset and the IPUMS dataset as experimental datasets. The domain sizes are 15 and 25, respectively. Figure 5 a shows that the estimated frequencies of the LCS and OLH protocols are close to the true value, and the overall fluctuation of the HCMS protocol is big. Figure 5 b shows that all protocols have fluctuations, but LCS has a small overall fluctuation. Frequency estimation observation. a Frequency estimation on Zipf dataset. b Frequency estimation on real dataset Effects of other parameters The above experiments show that the values of the parameters k and m have a definite impact on the final result. To explore the effect of the parameters on the performance of the protocol, we set up experiments with different k and m values. The LCS protocol uses the hash function to encode data, thereby creating hash collisions. Due to the hash collisions, different values are mapped to the same position under the same hash function, thereby decreasing estimation accuracy. When the length of hash map m is small, and the size of the data domain |D| is large, there will be larger errors. The frequency of hash collisions can be reduced by increasing the length of the hash map domain m; increasing the length of the hash map domain, in turn, increases the computational overhead. When the LDP protocols process different datasets, changes in the size of the data domain |D| also affect the estimated frequency. We utilize different data domain sizes to generate uniformly distributed datasets, and the protocol utility is tested under different domain sizes; we select k = 256, m = 512, and ε = 2. As shown in Fig. 6, the utility of the LCS protocol increases when the data domain size increases, and the HCMS and OLH protocols decrease when the data domain size increases. The effects of different domain size To verify the time complexity of the proposed protocol, we record its runtime in seconds on the real dataset of different sizes. We choose k = 256, m = 128, and ε = 2. In Table 1, the experimental result shows that the LCS protocol has a shorter runtime than OLH and HCMS in different sizes of the real dataset. Therefore, we can conclude that the LCS protocol has a much lower time complexity. Table 1 Runtime in seconds under different dataset sizes This paper focuses on human-centered computing in cloud, edge, and fog analyzes the ε-local differential privacy models without a trusted server. However, current LDP protocols have deficiencies in low utility and strict data size requirements. We propose the Laplace Count Sketch protocol, which cannot only protect sensitive data on the client side but also ensure high accuracy and utility, and discuss the reasons for the deficiencies of current LDP protocols. The experimental results show that the proposed protocol has high utility, is suitable for different sizes of datasets, and maintains its utility under different distributions and data domain sizes. The data dictionary for the datasets used in this paper is known; however, the proposed protocol cannot handle datasets with unknown data dictionaries. 
The next step is to study how to solve these problems, achieve better privacy protection, and protect sensitive data in human-centered computing. The relevant analysis data used to support the findings of this study are included in the article. DP: LDP: GRR: Generalized random response OLH: Optimal local hash RAPPOR: Randomized aggregatable privacy-preserving ordinal response HCMS: Hadamard count mean sketch LCS: Laplace count sketch MAPE: Mean absolute percentage error L. Qi, Q. He, F. Chen, et al., Finding All You Need: Web APIs Recommendation in Web of Things Through Keywords Search. IEEE Trans. Comput. Soc. Syst. 6(5), 1063–1072 (2019) X. Xu, Y. Li, T. Huang, et al., An energy-aware computation offloading method for smart edge computing in wireless metropolitan area networks. J. Net. Comput. Appl. 133, 75–85 (2019) X. Xu, Q. Liu, Y. Luo, et al., A computation offloading method over big data for IoT-enabled cloud-edge computing. Future Generation Comput. Syst. 95, 522–533 (2019) L. Qi, Y. Chen, Y. Yuan, et al., A QoS-aware virtual machine scheduling method for energy conservation in cloud-based cyber-physical systems. World Wide Web, 1–23 (2019). https://doi.org/10.1007/s11280-019-00684-y Y. Zhang, G. Cui, S. Deng, et al., Efficient Query of Quality Correlation for Service Composition. IEEE Trans.Serv. Comput. (2018). https://doi.org/10.1109/TSC.2018.2830773 Y. Zhang, K. Wang, Q. He, et al., Covering-based Web Service Quality Prediction via Neighborhood-aware Matrix Factorization. IEEE Trans. Serv. Comput. (2019). https://doi.org/10.1109/TSC.2019.2891517 Y. Zhang, C. Yin, Q. Wu, et al., Location-Aware Deep Collaborative Filtering for Service Recommendation. IEEE Transactions on Systems, Man, and Cybernetics. Systems (2019). https://doi.org/10.1109/TSMC.2019.2931723 X. Xu, Q. Liu, X. Zhang, et al., A blockchain-powered crowdsourcing method with privacy preservation in mobile environment. IEEE Trans. Comput. Soc. Syst. 6(6), 1407–1419 (2019). https://doi.org/10.1109/TCSS.2019.2909137 L. Qi, X. Zhang, W. Dou, et al., A two-stage locality-sensitive hashing based approach for privacy-preserving mobile service recommendation in cross-platform edge environment. Future Generation Comp. Syst. 88, 636–643 (2018) C. Dwork, F. McSherry, K. Nissim, et al., in Theory of Cryptography Conference. Calibrating noise to sensitivity in private data analysis, 265–284 (Springer, 2006) S. Ruggles, C. Fitch, D. Magnuson, et al., Differential privacy and census data: implications for social and economic research. AEA Pap Proc. 109, 403–408 (2019). https://doi.org/10.1257/pandp.20191107 Facebook's privacy problems: a roundup, https://www.theguardian.com/technology/2018/dec/14/facebook-privacy-problems-roundup. Accessed 10 Oct 2019. 'The Snappening' Is Real: 90,000 Private Photos and 9,000 Hacked Snapchat Videos Leak Online, https://www.thedailybeast.com/the-snappening-is-real-90000-private-photos-and-9000-hacked-snapchat-videos-leak-online?ref=scroll. Accessed 10 Oct 2019. N. Wang, X. Xiao, Y. Yang, et al., Collecting and Analyzing Multidimensional Data with Local Differential Privacy. 2019 IEEE 35th Int. Conf. Data Eng (ICDE), 638–649. IEEE (2019) Ú. Erlingsson, V. Pihur, A. Korolova, Rappor: Randomized Aggregatable Privacy-Preserving Ordinal Response. Proc. 2014 ACM SIGSAC Conf. Comput. Commun Secur - CCS '14, 1054–1067 (2014) Differential Privacy Team, Apple, Learning with Privacy at Scale. (2016) M.E. Gursoy, A. Tamersoy, S. 
Truex, et al., Secure and Utility-Aware Data Collection with Condensed Local Differential Privacy. arXiv preprint arXiv 1905, 06361 (2019) G. Cormode, S. Muthukrishnan, An improved data stream summary: the count-min sketch and its applications. J. Algorithms 55(1), 58–75 (2005). https://doi.org/10.1016/j.jalgor.2003.12.001 G. Fanti, V. Pihur, Ú. Erlingsson, Building a RAPPOR with the unknown: Privacy-preserving learning of associations and data dictionaries. Proc. Privacy Enhancing Technol. 2016(3), 41–61 (2016). https://doi.org/10.1515/popets-2016-0015 B. Ding, J. Kulkarni, S. Yekhanin, Collecting telemetry data privately. Adv. Neural Inf. Process. Syst., 3571–3580 (2017) T. Wang, B. Ding, J. Zhou, et al., Answering Multi-Dimensional Analytical Queries under Local Differential Privacy. Proc. 2019 Int. Conf. Manage. Data - SIGMOD '19, 159–176 (2019) T. Wang, J. Blocki, N. Li, et al., Locally differentially private protocols for frequency estimation. In, 26th USENIX Secur. Symp., 729–745 (2017) J. Acharya, Z. Sun, H. Zhang, Hadamard response: Estimating distributions privately, efficiently, and with little communication. arXiv preprint arXiv 1802, 04705 (2018) M. Joseph, A. Roth, J. Ullman, et al., Local differential privacy for evolving data. In, Adv. Neural Inf. Process. Syst., 2375–2384 (2018) T. Wang, Z. Li, N. Li, et al., Consistent and accurate frequency oracles under local differential privacy. arXiv preprint arXiv 1905, 08320 (2019) L. Qi, X. Zhang, S. Li, et al., Spatial-temporal data-driven service recommendation with privacy-preservation. Inform. Sci. 515, 91–102 (2019). https://doi.org/10.1016/j.ins.2019.11.021 X. Xu, Y. Xue, L. Qi, et al., An edge computing-enabled computation offloading method with privacy preservation for internet of connected vehicles. Future Generation Comput. Syst. 96, 89–100 (2019) R. Bassily, A. Smith, Local, private, efficient protocols for succinct histograms. In, Proc. Forty-seventh Annu. ACM Symp. Theory Comput., 127–135. ACM (2015) X. Ren, C.-M. Yu, W. Yu, et al., LoPub: High-Dimensional Crowdsourced Data Publication with Local Differential Privacy. IEEE Trans. Inform. Forensics Secur. 13(9), 2151–2166 (2018) T. Wang, M. Xu, B. Ding, et al., Practical and Robust Privacy Amplification with Multi-Party Differential Privacy. arXiv preprint arXiv 1908, 11515 (2019) X. Gu, M. Li, Y. Cheng, et al., PCKV: Locally Differentially Private Correlated Key-Value Data Collection with Optimized Utility. arXiv preprint arXiv 1911, 12834 (2019) K. Nissim, S. Raskhodnikova, A. Smith, Smooth sensitivity and sampling in private data analysis. In, Proc. Thirty-ninth Annu. ACM Symp Theory Comput., 75–84. ACM (2007) Y. Wang, X. Wu, D. Hu, in EDBT/ICDT Workshops. Using Randomized Response for Differential Privacy Preserving Data Collection, 1558–2016 (2016). A. Roth, C. Dwork, The algorithmic foundations of differential privacy. Foundations and Trends® in. Theor. Comput. Sci. 9(3-4), 211–407 (2014). https://doi.org/10.1561/0400000042 S.L. Warner, Randomized response: a survey technique for eliminating evasive answer bias. J. Am. Stat. Assoc. 60(309), 63–69 (1965). https://doi.org/10.1080/01621459.1965.10480775 S.F. Steven Ruggles, Ronald Goeken, Josiah Grover, Erin Meyer, Jose Pacas and Matthew Sobek.: IPUMS USA: Version 9.0 (IPUMS, Minneapolis, MN, 2019) This work was supported by the National Natural Science Foundation of China (61572034), Major Science and Technology Projects in Anhui Province (18030901025), Anhui Province University Natural Science Fund (KJ2019A0109). 
School of Computer Science and Engineering, Anhui University of Science and Technology, Huainan, 232001, China Xianjin Fang, Qingkui Zeng & Gaoming Yang Xianjin Fang Qingkui Zeng Gaoming Yang Correspondence to Gaoming Yang. Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. Fang, X., Zeng, Q. & Yang, G. Local differential privacy for human-centered computing. J Wireless Com Network 2020, 65 (2020). https://doi.org/10.1186/s13638-020-01675-8 Received: 22 December 2019 Accepted: 18 February 2020 Laplace noise Count sketch
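To summarize the pipeline described in this article, a rough, simplified sketch of the client-side perturbation and the server-side count-sketch aggregation follows. It is only an illustration under stated assumptions, not the authors' implementation: the hash family, the rounding and wrap-around of the noise, and the simple rescaling used for estimation are assumptions, and the paper's exact debiasing step is omitted.

```python
import numpy as np

# Illustrative parameters; the paper tunes k and m to the size of the collected data
K, M, EPSILON = 4, 16, 2.0
SENSITIVITY = 2.0  # local sensitivity the paper uses for the one-hot encoding
rng = np.random.default_rng(0)

def h(j, value):
    # Stand-in hash family mapping a value into [0, M); the paper uses xxhash-style functions
    return hash((j, value)) % M

def client_report(value):
    # Client side (cf. Algorithm 3): pick a random hash index, hash the value,
    # add rounded Laplace noise, and wrap the position back into [0, M)
    j = int(rng.integers(K))
    l = h(j, value)
    l_noisy = int(round(l + rng.laplace(scale=SENSITIVITY / EPSILON)))
    return j, l_noisy % M

def aggregate(reports):
    # Server side (cf. Algorithm 4): accumulate reports into a K x M count-sketch matrix
    sketch = np.zeros((K, M))
    for j, l in reports:
        sketch[j, l] += 1
    return sketch

def estimate_count(sketch, value):
    # Rough estimate: average the counts at the value's hashed positions and rescale by K,
    # since each client reports under a single randomly chosen hash function.
    # Note: without the paper's Laplace/collision correction this underestimates the true count.
    return np.mean([sketch[j, h(j, value)] for j in range(K)]) * K

# Hypothetical data: 1000 users drawn from 5 categories
data = rng.choice(5, size=1000, p=[0.4, 0.3, 0.15, 0.1, 0.05])
sketch = aggregate(client_report(int(v)) for v in data)
for d in range(5):
    print(d, estimate_count(sketch, d))
```

The wrap-around of the noisy position mirrors the adjustment described in the client-side algorithm (subtracting or adding m when the noisy index leaves the valid range), here written with a modulo for brevity.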
Why are $\mu_0$ and $\epsilon_0$, which appear in electrostatics and magnetostatics, related to the speed of light, which appears in electrodynamics?

$\epsilon_0$ and $\mu_0$ appear in electrostatics and magnetostatics. When we include time-varying fields we have electrodynamics and the appearance of c, which turns out to be related to $\epsilon_0$ and $\mu_0$. My question is: is there an intuitive way to understand why, although $\epsilon_0$ and $\mu_0$ are associated with non-time-varying phenomena, they are related to c, which appears when we have time-varying fields? – asked by Revo

Comment (Qmechanic): possible duplicate of "Deriving the speed of the propagation of a change in the Electromagnetic Field from Maxwell's Equations"

The best way to understand this is through relativity. Magnetic fields are not constant anything--- they are not really static. They appear when charges are moving. This relates the two constants $\epsilon_0$ and $\mu_0$ by the following argument:

Consider two charged parallel lines. They repel electrostatically by an amount determined by the field of a charged line at distance r from the center: ${\rho\over 2\pi \epsilon_0 r}$, so they get pushed apart by a force f per unit length which is equal to:

$$f_E = {\rho^2 \over 2\pi \epsilon_0 R}$$

where R is their separation. This force is repulsive. If you look at the same wires in a frame moving in the direction of the wire, the electric field is still static, $E={\rho'\over 2\pi \epsilon_0 r}$, where $\rho'$ is the boosted charge density, while there is now a static magnetic field given by Ampere's law, ${\mu_0 \rho' v \over 2\pi r}$, and this creates a magnetic force by the Lorentz force:

$$ f_B = {\mu_0 \rho'^2 \over 2\pi R} v^2 $$

And this force is attractive. The electric and magnetic forces have to cancel as the speed of your frame approaches the speed of light, in order for the total motion to slow down as required by relativistic time dilation. This means that at v=c, the two forces must be equal and opposite, so that

$$ \mu_0 c^2 - {1\over\epsilon_0} = 0$$

so that the relation follows from relativity. Historically, of course, relativity came after. – answered by Ron Maimon

Comment (Terry Giblin): I would just like to say thank you to Ron. The simplest, yet one of the hardest thought experiments, ever dreamt of in science. Today we simply take it for granted and forget its beauty. QFT in a nutshell.

Comment (Ron Maimon): There is no need to thank me, I didn't discover the relation, it was known in Maxwell's time. The thought experiment is a little original, but it isn't that deep; it's a teaching tool. The same argument works in GR to understand why parallel beams of light do not attract (or repel), but antiparallel beams do, or in string theory to understand why parallel branes are stable, but antiparallel ones not.
In order to answer your question, let's follow the derivation of the electromagnetic wave equations in reverse, from the final wave equations describing the propagation of electromagnetic waves to the Maxwell equations from which they are derived. At the end of the derivation one indeed sees how ϵ0 and μ0 end up appearing where one normally expects the square of the wave's speed:

\begin{equation} \nabla^2 {\bf B} = \frac{1}{c^2} \frac{\partial^2 {\bf B}}{\partial t^2} = \mu_0 \epsilon_0 \frac{\partial^2 {\bf B}}{\partial t^2} \end{equation}

\begin{equation} \nabla^2 {\bf E} = \frac{1}{c^2} \frac{\partial^2 {\bf E}}{\partial t^2} = \mu_0 \epsilon_0 \frac{\partial^2 {\bf E}}{\partial t^2} \end{equation}

This shows how the speed of light in vacuum is related to ϵ0 and μ0. Going back, the derivation starts from the Maxwell equations, taken here in the absence of free charges and electric currents:

\begin{equation} \nabla \cdot {\bf E} = 0 \end{equation}
\begin{equation} \nabla \cdot {\bf B} = 0 \end{equation}
\begin{equation} \nabla \times {\bf E} = - \frac{\partial {\bf B}}{\partial t} \end{equation}
\begin{equation} \nabla \times {\bf B} = \mu_0 \epsilon_0 \frac{\partial {\bf E}}{\partial t} \end{equation}

(Note that by going from the magnetic B field to the H field you can make μ0 disappear from the fourth equation and appear in the third.)

This shows that it isn't entirely correct to say that ϵ0 and μ0 are associated with non-time-varying phenomena only: ϵ0 and μ0 do participate in the description of a time-varying phenomenon, namely how the curl of one field depends on the rate of change of the other field. If you write down Maxwell's equations in their full form without excluding free charges and currents, they'll look like this:

\begin{equation} \nabla \cdot {\bf E} = \frac{\rho}{\epsilon_0} \end{equation}
\begin{equation} \nabla \cdot {\bf B} = 0 \end{equation}
\begin{equation} \nabla \times {\bf E} = - \frac{\partial {\bf B}}{\partial t} \end{equation}
\begin{equation} \nabla \times {\bf B} = \mu_0 {\bf J} + \mu_0 \epsilon_0 \frac{\partial {\bf E}}{\partial t} \end{equation}

The two extra appearances of ϵ0 and μ0 are not related to electromagnetic waves (as the derivation of the wave equations assumes they're zeroed out) and are actually what you have alluded to in your question, i.e. that the constants are primarily known from electrostatics and magnetism. As you can see, they are involved in more than that, including time-varying phenomena related to how time variation in one of the two fields generates the other. – answered by Adam Zalcman

Comment (Ron Maimon): This derivation assumes you know that the coefficient of the Maxwell term is $\mu_0\epsilon_0$. But this coefficient does not contribute in static situations. The question is why the static constants are related by the speed of light. This follows from Maxwell's displacement current argument, which gives the coefficient of the Maxwell term.

In addition to Ron Maimon's answer, it is worth mentioning that if you consider Maxwell's equations in the CGS system, $\varepsilon_0$ and $\mu_0$ do not come into those equations at all. Only the speed of light does. Technically, these values are constants that describe the relation between the historically introduced (and at that time unconnected) units of the magnetic and electric field. There is no physics in these values themselves; only their product, which is equal to $1/c^2$, is meaningful.
– answered by Misha

Comment (Revo): How come two quantities separately are not physical, while if multiplied the product becomes physical? I never understood what it means whenever people say $\epsilon_0$ and $\mu_0$ are not physical (and what about $\epsilon$ and $\mu$; are they non-physical too?)

Comment (Misha): It means that you can get rid of one of them completely if you choose units properly.

Comment (Revo): How would we take into account the difference between a dielectric material and air, then, if we are calculating the capacitance of a capacitor which is half filled with a dielectric material with permittivity $\epsilon$?

Comment (Misha): It was you who mentioned $\varepsilon$, not me. I was talking about $\varepsilon_0$ only. If you see a physical meaning in the dielectric permittivity of a vacuum (maybe you may choose another vacuum at will), perhaps you could explain it to me?

Yes, but what is a magnetic field? It's an electric field that's changing. What is an electric field? It's a magnetic field that's changing. They are already linked to each other dynamically. The existence of one is equivalent to a variation in the other. Magnetism is already a relativistic effect of electricity, nothing more. The two are already interlinked, and linked with the speed of light, and with relativity, even without mentioning Einstein. http://van.physics.illinois.edu/qa/listing.php?id=2358 – answered by Florin Andrei

Comment (Ron Maimon): It is incorrect to say that an electric field is a magnetic field that's changing or vice versa. If you have a constant magnetic field decreasing, there is always a center point where the electric field is zero, and the magnetic field is still changing there.
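As a quick numerical check of the relation $c = 1/\sqrt{\mu_0\epsilon_0}$ discussed in the answers above, a short sketch follows; it simply plugs in the standard SI values of the constants and is included only for illustration.

```python
import math

mu_0 = 4 * math.pi * 1e-7       # vacuum permeability in T*m/A (classical defined value)
epsilon_0 = 8.8541878128e-12    # vacuum permittivity in F/m (CODATA value)

c = 1.0 / math.sqrt(mu_0 * epsilon_0)
print(c)  # ~2.998e8 m/s, the speed of light
```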
On the Brauer group of an arithmetic scheme
S. G. Tankeev (Vladimir State University)
Izv. RAN. Ser. Mat., 2001, Volume 65, Issue 2, Pages 155–186

Abstract: For an Enriques surface $V$ over a number field $k$ with a $k$-rational point we prove that the $l$-component of $\operatorname{Br}(V)/{\operatorname{Br}(k)}$ is finite if and only if $l\ne 2$. For a regular projective smooth variety satisfying the Tate conjecture for divisors over a number field, we find a simple criterion for the finiteness of the $l$-component of $\operatorname{Br}'(V)/{\operatorname{Br}(k)}$. Moreover, for an arithmetic model $X$ of $V$ we prove a variant of Artin's conjecture on the finiteness of the Brauer group of $X$. Applications to the finiteness of the $l$-components of Shafarevich–Tate groups are given.

DOI: https://doi.org/10.4213/im330
English translation: Izvestiya: Mathematics, 2001, 65:2, 357–388
MSC: 14F22

Citation: S. G. Tankeev, "On the Brauer group of an arithmetic scheme", Izv. RAN. Ser. Mat., 65:2 (2001), 155–186; Izv. Math., 65:2 (2001), 357–388.

Cycle of papers: On the Brauer group of an arithmetic scheme, Izv. RAN. Ser. Mat., 2001, 65:2, 155–186; On the Brauer group of an arithmetic scheme. II.

This article is cited in 6 scientific papers:
S. V. Tikhonov, V. I. Yanchevskii, "The indices of central simple algebras over function fields of projective spaces over $P_{n,r}$-fields", Sb. Math., 193:11 (2002), 1691–1705
S. G. Tankeev, "On the Brauer group of an arithmetic scheme. II", Izv. Math., 67:5 (2003), 1007–1029
S. G. Tankeev, "On the Conjectures of Artin and Shafarevich–Tate", Proc. Steklov Inst. Math., 241 (2003), 238–248
T. V. Zasorina, "On the Brauer group of an algebraic variety over a finite field", Izv. Math., 69:2 (2005), 331–343
S. G. Tankeev, "On the Finiteness of the Brauer Group of an Arithmetic Scheme", Math. Notes, 95:1 (2014), 122–133
S. G. Tankeev, "On the Brauer group of an arithmetic model of a hyperkähler variety over a number field", Izv. Math., 79:3 (2015), 623–644
Common fixed point of multifunctions on partial metric spaces S Mohammad Ali Aleomraninejad1, Inci M Erhan2, Marwan A Kutbi3 & Masoumeh Shokouhnia4 Fixed Point Theory and Applicationsvolume 2015, Article number: 102 (2015) | Download Citation In this paper, some multifunctions on partial metric space are defined and common fixed points of such multifunctions are discussed. The results presented in the paper generalize some of the existing results in the literature. Several conclusions of the main results are given. The notion of a partial metric space (PMS) was introduced by Matthews [1] in 1992 (see also [2]). The PMS is a generalization of the usual metric space in which $d(x,x)$ is no longer necessarily zero. Recently, many authors have focused on the PMS and its topological properties(see for example [3–15]). Partial metric spaces have extensive application potential in the research area of computer domains and semantics (see [15–18]). A partial metric is a function $p:X\times X\rightarrow[0,\infty)$ satisfying the following conditions: $p(x,y)=p(y,x)$ (symmetry), $x=y\Longleftrightarrow p(x,x)=p(x,y)=p(y,y)$ (equality), $p(x,x)\leq p(x,y)$ (small self-distances), $p(x,y)\leq p(x,z)+p(y,z)-p(z,z)$ (triangularity), for all $x,y,z\in X$. Then $(X,p)$ is called a partial metric space. Each partial metric p on X generates a $T_{0}$ topology $\tau_{p}$ on X with a base of the family of open p-balls $\lbrace B_{p}(x,\varepsilon):x\in X,\varepsilon>0\rbrace$, where $$B_{p}(x,\varepsilon)=\bigl\lbrace y\in X:p(x,y)< p(x,x)+\varepsilon \bigr\rbrace , $$ for all $x\in X$ and $\varepsilon>0$. For a partial metric p on X, the function $d_{p}:X\times X\rightarrow[0,\infty)$ given by $$d_{p}(x,y)=2p(x,y)-p(x,x)-p(y,y) $$ is a (usual) metric on X. Another metric on X induced by p is defined in [19] as $d(x,y)=p(x,y)$ whenever $x\neq y$ and $d(x,y)=0$ whenever $x=y$. Some topological concepts and basic results on a PMS are defined as follows. A sequence $\{ x_{n} \}_{n\geq1}$ in a PMS $(X,p)$ converges to $x\in X$ if and only if $p(x,x)=\lim_{n\rightarrow\infty} p(x, x_{n})$. A sequence $\{ x_{n} \}_{n\geq1}$ is called a Cauchy sequence if and only if $\lim_{n,m\rightarrow\infty}p(x_{n},x_{m})$ exists and is finite. A PMS $(X,p)$ is said to be complete whenever every Cauchy sequence $\{ x_{n} \}_{n\geq1}$ in X converges to a point x with respect to $\tau_{p}$, that is, $p(x,x)=\lim _{n,m\rightarrow\infty}p(x_{n},x_{m})$. Lemma 1.1 Let $(X,p)$ be a partial metric space. Then: A sequence $\{ x_{n} \}_{n\geq1}$ is Cauchy in a PMS $(X,p)$ if and only if $\{ x_{n} \}_{n\geq1}$ is Cauchy in the metric space $(X,d_{p})$. A PMS $(X,p)$ is complete if and only if the metric space $(X,d_{p})$ is complete. Moreover, $$\lim_{n\rightarrow\infty} d_{p}(x,x_{n})=0\quad \Longleftrightarrow\quad p(x,x)=\lim_{n\rightarrow\infty} p(x,x_{n})= \lim_{n,m\rightarrow\infty} p(x_{n},x_{m}). $$ An interesting property of partial metric spaces is the nonuniqueness of limits of sequences. To emphasize this property we consider the following example. Let $X=[0,\infty)$ and define a partial metric p on X as $$p(x,y)=\max\{x,y\}. $$ Consider the sequence $\{x_{n}\}=\{1+\frac{1}{n}\}$. Notice that $$\lim_{n\to\infty}p(x_{n},1)=\lim_{n\to\infty} \max\biggl\{ 1+\frac{1}{n},1\biggr\} =\lim_{n\to\infty} 1+ \frac{1}{n}=1. $$ $$\lim_{n\to\infty}p(x_{n},2)=\lim_{n\to\infty} \max\biggl\{ 1+\frac{1}{n},2\biggr\} =\lim_{n\to\infty} 2=2. $$ Moreover, for any $a\geq1$ we have $$\lim_{n\to\infty}p(x_{n},a)=a. 
$$ In what follows, we introduce the notions, notations, and assumptions used in the discussion. Throughout this paper, we suppose that $(X,p)$ is a partial metric space. We denote the family of all nonempty subsets of X by $2^{X}$, the family of all closed subsets of X by $C(X)$ and the family of all closed and bounded subsets of X by $\mathit{CB}(X)$. The partial Hausdorff distance $H_{p}$ on $\mathit{CB}(X)$ was introduced by Aydi et al. [20] as follows: $$ H_{p}(A,B)=\max\Bigl\{ \sup_{a\in A} p(a,B), \sup _{b\in B} p(b, A)\Bigr\} , $$ for all $A,B\in\mathit{CB}(X)$, where $$ p(x,A)=\inf_{a\in A} p(x,a). $$ Let $T:X\rightarrow2^{X}$ be a multi-valued function (multifunction). We denote the set of fixed points of T by $F(T)$, i.e., $$ F(T)=\{x\in X: x\in Tx\}. $$ Let $(X, p)$ be a partial metric space, $A \subseteq X$, and $x \in X$. Then $x \in\overline{A}$ if and only if $p(x,A) = p(x,x)$. In 2012, Aydi et al. [20] proved the following fixed point theorem on partial metric space. Let $(X,p)$ be a complete partial metric space and $T:X\rightarrow \mathit{CB}(X)$ a multifunction. Suppose that there exist $k\in(0,1)$ such that $$ H_{p}(Tx,Ty)\leq kp(x,y), $$ for all $x,y\in X$. Then T has a fixed point. Some fixed point theorems for multifunctions on metric space are given next (see [6, 21, 22]). Let $(X, d)$ be a complete metric space and $T : X \rightarrow \mathit{CB}(X)$ a multifunction. Assume that there exists $r \in[0, 1)$ such that $$ \frac{1}{1+r} d(x, Tx) \leq d(x, y)\quad\textit{implies} \quad H(Tx, Ty) \leq rd(x, y), $$ for all $x, y \in X$. Then T has a fixed point. Let $(X,d)$ be a complete metric space and $T : X \rightarrow C(X)$ a multifunction. Assume that there exist $a, b, c\in[0,1)$ such that $a+b+c<1 $ and $$ \frac{(1-b-c)}{1+a}d(x, Tx)\leq d(x,y) $$ $$ H(Tx, Ty) \leq ad(x,y)+bd(x,Tx)+cd(y, Ty), $$ for all $x,y \in X$. Then T has a fixed point. The aim of this paper is to provide a new, more general condition for the multifunction T which guarantees the existence of its fixed point. Our results generalize some of the existing ones. In what follows, we consider two classes of functions, namely, $R_{1}$ and $R_{2}$ as defined below. Let $R_{1}$ be the set of all continuous functions $g:\left.[0,\infty )\right.^{5}\rightarrow[0,\infty)$, satisfying the conditions: $g(t,t,t,2t,t)< t$, for all $t\in[0,\infty)$, g is subhomogeneous, i.e., $g(\alpha x_{1},\alpha x_{2},\alpha x_{3},\alpha x_{4},\alpha x_{5})\leq\alpha g(x_{1},x_{2},x_{3},x_{4},x_{5})$, for all $\alpha\geq0$, if $x_{i},y_{i}\in[0,\infty)$, $x_{i}\leq y_{i} $ for $i=1,\ldots ,5 $ we have $g(x_{1},x_{2},x_{3},x_{4},x_{5})\leq g(y_{1},y_{2},y_{3},y_{4},y_{5})$. Let $R_{2}$ be the set of all continuous function $g:\left.[0,\infty )\right.^{5}\rightarrow[0,\infty)$ satisfying the following conditions: $g(t,t,t,t,t)< t$, for all $t\in[0,\infty)$, g is subhomogeneous, if $x_{i},y_{i}\in[0,\infty)$, $x_{i}\leq y_{i} $ for $i=1,\ldots ,5 $ we have $g(x_{1},x_{2},x_{3},x_{4},x_{5})\leq g(y_{1},y_{2},y_{3},y_{4},x_{5})$, for all $0\leq a\leq x_{4}$, $g(x_{1},x_{2},x_{3},x_{4}-a,a)= g(x_{1},x_{2},x_{3},x_{4}, 0)$. Remark 1.1 It is easy to see that if $g\in R_{1}$, then $g(1,1,1,2,1)=h\in(0,1)$. Indeed, if $g\in R_{1}$, the conditions (i) and (ii) give $$ g(t,t,t,2t,t)\leq tg(1,1,1,2,1)< t, $$ which implies $g(1,1,1,2,1)=h\in(0,1)$. In addition, if $g\in R_{2}$, then by a similar argument we observe that $g(1,1,1,1,1)=h\in(0,1)$ Examples of functions from both classes are given below. 
The function $g(x_{1},x_{2},x_{3},x_{4},x_{5})=k \max\{x_{i}\}_{i=1}^{5}$ for $k\in(0,\frac{1}{2})$ is in class $R_{1}$. The function $g(x_{1},x_{2},x_{3},x_{4},x_{5})=k \max\{x_{1},x_{2},x_{3},\frac{x_{4}+x_{5}}{2}\}$ for $k\in(0,1)$ belongs to $R_{2}$. The following results are quite trivial. Proposition 1.6 If $g \in{R_{1}}$ and $u,v \in[0,\infty)$ are such that $$ u\leq\max\bigl\{ g(v,u,v,u,v),g(v,u,v,v+u,v)\bigr\} , $$ then $u\leq hv$, where $h=g(1,1,1,2,1)$. From (iii) it is clear that $g(v,u,v,u,v)\leq g(v,u,v,v+u,v)$ and hence $u\leq\max\{g(v,u,v, u,v),g(v,u,v,v+u,v)\}=g(v,u,v,v+u,v)$. If $v < u$, then $$u \leq g(v, u, v, v + u, v) \leq g(u, u, u, 2u, u) \leq u g(1, 1, 1, 2, 1) = hu < u, $$ which is a contradiction. Thus $u \leq v$, which implies $$u \leq g(v,u,v,v+u, v) \leq g(v, v, v, 2v, v) \leq v g(1, 1, 1, 2, 1) = hv. $$ □ Proposition 1.7 If $g \in{R_{2}}$ and $u,v \in[0,\infty)$ are such that $$ u\leq\max\bigl\{ g(v, u, v, u+v, 0),g(v, u, v, u, v)\bigr\} , $$ then $u\leq hv$, where $h=g(1,1,1,1,1)$. Let $\max\{g(v, u, v, u+v, 0),g(v, u, v, u, v)\}=g(v, u, v, u+v, 0)$. If $v < u$, then (d) implies $$u \leq g(v, u, v, v + u, 0) \leq g(u, u, u, 2u, 0)\leq ug(1, 1, 1, 2, 0) = u g(1,1,1,1,1)=hu < u, $$ which is a contradiction. Thus $u \leq v$, and hence, $$u \leq g(v, u, v, v+u, 0) \leq g(v, v, v, 2v, 0) \leq vg(1, 1, 1, 2, 0) = v g(1,1,1,1,1)=hv. $$ Let $\max\{g(v, u, v, u+v, 0),g(v, u, v, u, v)\}=g(v, u, v, u, v)$. If $v < u$, then $$u \leq g(v, u, v, u, v) \leq g(u, u, u, u, u)\leq u g(1,1,1,1,1)=hu < u. $$ This contradicts our assumption, that is, we should have $u \leq v$. Then $$u \leq g(v, u, v, u, v) \leq g(v,v,v,v,v) \leq v g(1,1,1,1,1)=hv, $$ which completes the proof. □ We state and prove our main results in this section. Lemma 2.1 Let $(X,p)$ be a partial metric space and $T,S:X\rightarrow C(X)$ be two multifunctions. Suppose that there exist $\alpha\in(0,\infty)$ and $g\in{R_{1}\cup R_{2}}$ such that $\alpha p(x,Tx)\leq p(x,y)$ or $\alpha p(y,Sy)\leq p(x,y)$ implies $$ H_{p}(Tx,Sy)\leq g\bigl(p(x,y),p(y,Sy),p(x,Tx),p(x,Sy),p(y,Tx) \bigr), $$ for all $x,y\in X$. Then for every $x\in F(T)\cup F(S)$ we have $p(x,x)=0$. Without loss of generality, we can suppose that $x\in Tx$. Then $p(x,Tx)=p(x,x)$ and hence $$\begin{aligned} p(x,Sx) \leq&H_{p}(Tx,Sx)\leq g\bigl(p(x,x), p(x,Sx), p(x,Tx), p(x,Sx), p(x,Tx)\bigr) \\ \leq& g\bigl(p(x,x), p(x,Sx), p(x,x), p(x,Sx), p(x,x)\bigr). \end{aligned}$$ By using Proposition 1.6 if $g\in R_{1}$ or Proposition 1.7 if $g\in R_{2}$, we have $$p(x, x)\leq p(x, Sx) \leq h p(x, x). $$ However, since $h<1$ we have $p(x, x)=0$. □ Lemma 2.2 Let $(X, p)$ be a partial metric space and $T,S:X\rightarrow C(X)$ be two multifunctions. Suppose that there exist $\alpha\in(0,\infty)$ and $g\in{R_{1} \cup R_{2}}$ such that $\alpha p(x,Tx)\leq p(x,y)$ or $\alpha p(y,Sy)\leq p(x,y)$ implies $$H_{p}(Tx,Sy)\leq g\bigl(p(x,y),p(y,Sy),p(x,Tx),p(x,Sy),p(y,Tx)\bigr), $$ for all $x,y\in X$. Then ${F}(T)={F}(S)$. If $x\in Tx$, then $p(x,Tx)=p(x,x)=0$ by Lemma 2.1. Hence, $$\begin{aligned} p(x,Sx) \leq& H_{p}(Tx,Sx)\leq g\bigl(p(x,x),p(x,Sx),p(x,Tx),p(x,Sx),p(x,Tx) \bigr) \\ \leq&g\bigl(p(x,x),p(x,Sx),p(x,x),p(x,Sx),p(x,x)\bigr) \\ \leq&g\bigl(0,p(x,Sx),0,p(x,Sx),0\bigr). \end{aligned}$$ By using Proposition 1.6 whenever $g\in R_{1}$ or Proposition 1.7 in case $g\in R_{2}$, we have $p(x,Sx)\leq h\cdot0=0$, and thus $x\in F(S)$. Thus, $F(T)\subseteq F(S)$. Similarly, we can show that $F(S)\subseteq F(T)$, which completes the proof. □ In what follows, we state our main existence result (Theorem 2.3).
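Before stating it, here is a small numerical spot-check, added purely for illustration and not part of the original argument, that the first example function above, $g(x_1,\ldots,x_5)=k\max\{x_i\}$ with $k\in(0,\frac12)$, satisfies conditions (i)-(iii) of the class $R_1$ and that the bound $u\leq hv$ of Proposition 1.6 holds on random samples. The value k = 0.4, the sampling ranges, and the tolerances are arbitrary choices.

```python
# Illustrative spot-check (not from the paper) of the example g = k*max and of
# Proposition 1.6; k, ranges and tolerances below are arbitrary assumptions.
import random

k = 0.4                      # any k in (0, 1/2)
h = 2 * k                    # h = g(1,1,1,2,1) = 2k < 1

def g(x1, x2, x3, x4, x5):
    return k * max(x1, x2, x3, x4, x5)

random.seed(0)
for _ in range(10000):
    # condition (i), checked for t > 0 (at t = 0 both sides vanish)
    t = random.uniform(1e-9, 10.0)
    assert g(t, t, t, 2 * t, t) < t
    # condition (ii), subhomogeneity, and condition (iii), monotonicity
    a = random.uniform(0.0, 3.0)
    xs = [random.uniform(0.0, 10.0) for _ in range(5)]
    ys = [x + random.uniform(0.0, 5.0) for x in xs]
    assert g(*(a * x for x in xs)) <= a * g(*xs) + 1e-12
    assert g(*xs) <= g(*ys) + 1e-12
    # Proposition 1.6: u <= max{g(v,u,v,u,v), g(v,u,v,v+u,v)} implies u <= h*v
    u, v = random.uniform(0.0, 10.0), random.uniform(0.0, 10.0)
    if u <= max(g(v, u, v, u, v), g(v, u, v, v + u, v)):
        assert u <= h * v + 1e-12
print("all checks passed")
```

An analogous check can be run for the second example function and the class $R_2$.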
Let $(X,p)$ be a complete partial metric space and $T,S:X\rightarrow C(X)$ be two multifunctions. Suppose that there exist $g\in{R_{1}\cup R_{2}} $ and $\alpha\in(0,1)$, such that $\alpha (h+1)\leq1$ where $h=g(1,1,1,2,1)$ if $g\in R_{1}$ and $h=g(1,1,1,1,1)$ if $g\in R_{2}$. Suppose also that $\alpha p(x,Tx)\leq p(x,y)$ or $\alpha p(y,Sy)\leq p(x,y)$ implies for all $x,y\in X$. Then $F(T)=F(S)$ and $F(T)$ is nonempty. By Lemma 2.2 we already have $F(T)=F(S)$. Fix arbitrary $1>r>h$ and $x_{0}\in X$ and choose $x_{1}\in Tx_{0}$ such that $\alpha p(x_{0},Tx_{0})< p(x_{0},x_{1})$. Then by the hypothesis of the theorem and condition (iii) or (c) in Definition 1.2, respectively, we have $$\begin{aligned} p(x_{1},Sx_{1}) \leq& H_{p}(Tx_{0},Sx_{1}) \leq g\bigl(p(x_{0},x_{1}),p(x_{1},Sx_{1}),p(x_{0},Tx_{0}),p(x_{0},Sx_{1}),p(x_{1},Tx_{0}) \bigr) \\ \leq& g\bigl(p(x_{0},x_{1}),p(x_{1},Sx_{1}),p(x_{0},x_{1}),p(x_{0},Sx_{1}),p(x_{1},x_{1}) \bigr) \\ \leq& g\bigl(p(x_{0},x_{1}),p(x_{1},Sx_{1}),p(x_{0},x_{1}),p(x_{0},x_{1})+p(x_{1},Sx_{1})-p(x_{1},x_{1}),p(x_{1},x_{1}) \bigr) \\ \leq& g\bigl(p(x_{0},x_{1}),p(x_{1},Sx_{1}),p(x_{0},x_{1}),p(x_{0},x_{1})+p(x_{1},Sx_{1}),p(x_{1},x_{1}) \bigr), \end{aligned}$$ where obviously $p(x_{0},Sx_{1})\leq p(x_{0},x_{1})+p(x_{1},Sx_{1})-p(x_{1},x_{1})$ due to triangle inequality in PMS. Suppose that $g\in{R_{1}}$. Since $$p(x_{1},Sx_{1})\leq g\bigl(p(x_{0},x_{1}),p(x_{1},Sx_{1}),p(x_{0},x_{1}),p(x_{0},x_{1})+p(x_{1},Sx_{1}),p(x_{0},x_{1}) \bigr), $$ then, by Proposition 1.6, we have $$ p(x_{1},Sx_{1})\leq hp(x_{0},x_{1})< rp(x_{0},x_{1}). $$ Now let $g\in{R_{2}}$. Since $$p(x_{1},Sx_{1})\leq g\bigl(p(x_{0},x_{1}),p(x_{1},Sx_{1}),p(x_{0},x_{1}),p(x_{0},x_{1})+p(x_{1},Sx_{1})-p(x_{1},x_{1}),p(x_{1},x_{1}) \bigr), $$ and obviously $0\leq p(x_{1},x_{1})\leq p(x_{0},x_{1})+p(x_{1},Sx_{1})$, we let $a=p(x_{1},x_{1})$ and employ condition (d) in Definition 1.2 to get $$p(x_{1},Sx_{1})\leq g\bigl(p(x_{0},x_{1}),p(x_{1},Sx_{1}),p(x_{0},x_{1}),p(x_{0},x_{1})+p(x_{1},Sx_{1}),0 \bigr). $$ Now by Proposition 1.7, we have We choose a number μ such that $\inf_{y\in Sx_{1}} p(x_{1},y)=p(x_{1},Sx_{1})<\mu<r p(x_{0},x_{1})$. Thus there exists $x_{2}\in Sx_{1}$ such that $p(x_{1},x_{2})<\mu<rp(x_{0},x_{1})$. Since $\alpha p(x_{1},Sx_{1})< p(x_{1},x_{2})$, by using (14) and the properties of the function g we have $$\begin{aligned} p(x_{2},Tx_{2}) \leq&H_{p}(Sx_{1},Tx_{2}) \leq g\bigl(p(x_{1},x_{2}),p(x_{2},Tx_{2}),p(x_{1},Sx_{1}),p(x_{1},Tx_{2}),p(x_{2},Sx_{1}) \bigr) \\ \leq& g\bigl(p(x_{1},x_{2}),p(x_{2},Tx_{2}),p(x_{1},x_{2}),p(x_{1},x_{2})+p(x_{2},Tx_{2})-p(x_{2},x_{2}),p(x_{2},x_{2}) \bigr) \\ \leq&g\bigl(p(x_{1},x_{2}),p(x_{2},Tx_{2}),p(x_{1},x_{2}),p(x_{1},x_{2})+p(x_{2},Tx_{2}),p(x_{1},x_{2}) \bigr). \end{aligned}$$ Now, if $g\in R_{1}$, using Proposition 1.6 and mimicking the proof of (15) we obtain $$ p(x_{2},Tx_{2})\leq hp(x_{1},x_{2})< rp(x_{1},x_{2}). $$ If $g\in R_{2}$, letting $a=p(x_{2},x_{2})$ we get $$\begin{aligned} p(x_{2},Tx_{2}) \leq& g\bigl(p(x_{1},x_{2}),p(x_{2},Tx_{2}),p(x_{1},x_{2}),p(x_{1},x_{2})+p(x_{2},Tx_{2})- p(x_{2},x_{2}),p(x_{2},x_{2})\bigr) \\ \leq&g\bigl(p(x_{1},x_{2}),p(x_{2},Tx_{2}),p(x_{1},x_{2}),p(x_{1},x_{2})+p(x_{2},Tx_{2}),0 \bigr), \end{aligned}$$ and hence Proposition 1.7 yields In a similar way, we can choose $x_{3}\in Tx_{2}$ such that $$ p(x_{2},x_{3})< rp(x_{1},x_{2})< r^{2}p(x_{0},x_{1}). 
$$ By continuing this process, we obtain a sequence $\{x_{n}\}_{n\geq1}$ in X such that $$ x_{2n-1}\in Tx_{2n-2}, \qquad x_{2n} \in Sx_{2n-1}, $$ which satisfies $$ p(x_{n},x_{n+1})\leq r^{n} p(x_{0},x_{1}). $$ Then $p(x_{2n},Tx_{2n})\leq h p(x_{2n-1},x_{2n})$ and $p(x_{2n-1},Sx_{2n-1})\leq hp(x_{2n-2},x_{2n-1})$. If $x_{m}=x_{m+1}$ for some $m\geq1$ where $m=2k$, then $$p(x_{2k},x_{2k})\leq p(x_{2k},Tx_{2k}) \leq p(x_{2k},x_{2k+1})=p(x_{2k},x_{2k}), $$ so $p(x_{2k},Tx_{2k})=p(x_{2k},x_{2k})$, and hence $x_{2k}\in Tx_{2k}$. Thus T and S have a fixed point. If $m=2k+1$ in a similar way we find that T and S have a fixed point. Suppose that $x_{n}\neq x_{n+1}$, for all $n\geq1$. Repeated application of the triangle inequality implies $$\begin{aligned} p(x_{n},x_{n+m}) \leq&p(x_{n},x_{n+1})+p(x_{n+1},x_{n+2})+ \cdots+p(x_{n+m-1},x_{n+m}) \\ \leq&r^{n} p(x_{0},x_{1})+r^{n+1} p(x_{0},x_{1})+\cdots+r^{n+m-1} p(x_{0},x_{1}) \\ \leq&r^{n} p(x_{0},x_{1}) \bigl(1+r+r^{2}+ \cdots+r^{m-1}\bigr) \\ \leq& \frac{ r^{n}}{1-r} p(x_{0},x_{1}). \end{aligned}$$ Then we get $$\lim_{n\rightarrow\infty} p(x_{n},x_{n+m})\rightarrow0, $$ and hence $\{x_{n}\}_{n\geq1}$ is a Cauchy sequence in $(X,p)$. Regarding Lemma 1.1, $\{x_{n}\}_{n\geq1}$ is also a Cauchy sequence in $(X,d_{p})$. Since $(X,p)$ is a complete partial metric space, by Lemma 1.1, $(X,d_{p})$ is also complete. Thus $\{x_{n}\} _{n\geq1}$ converges to a limit, say, $x\in X$, that is, $$ \lim_{n\rightarrow\infty} d_{p}(x_{n},x)=0. $$ Notice that Lemma 1.1 yields $$ p(x,x)=\lim_{n\rightarrow\infty} p(x_{n},x)=\lim _{n,m\rightarrow\infty} p(x_{n},x_{m})=0. $$ Now, we claim that for each $n\geq1$ one of the relations $$ \alpha p(x_{2n}, Tx_{2n})\leq p(x_{2n},x)\quad\text{or} \quad\alpha p(x_{2n+1},Sx_{2n+1}) \leq p(x_{2n+1},x) $$ holds. If for some $n\geq1$ we have $\alpha p(x_{2n}, Tx_{2n})> p(x_{2n},x)$ and $\alpha p(x_{2n+1},Sx_{2n+1})> p(x_{2n+1},x)$ then $$\begin{aligned} p(x_{2n},x_{2n+1}) \leq&p(x_{2n},x)+p(x,x_{2n+1}) \\ < & \alpha p(x_{2n},Tx_{2n})+\alpha p(x_{2n+1}, Sx_{2n+1}) \\ \leq& \alpha p(x_{2n}, x_{2n+1})+\alpha h p(x_{2n},x_{2n+1}). \end{aligned}$$ This results in $\alpha(h+1)> 1$, which contradicts the initial assumption. Hence, our claim is proved. Observe that by the assumption of the theorem, for each $n\geq1$ we have either $$H_{p}(Tx_{2n},Sx)\leq g\bigl(p(x_{2n},x),p(x,Sx),p(x_{2n},Tx_{2n}),p(x_{2n},Sx),p(x,Tx_{2n}) \bigr) $$ $$H_{p}(Sx_{2n+1},Tx)\leq g\bigl(p(x_{2n+1},x),p(x,Tx),p(x_{2n+1},Sx_{2n+1}),p(x_{2n+1},Tx),p(x,Sx_{2n+1}) \bigr). $$ Therefore, one of the following cases holds. Case (i). There exists an infinite subset $I\subseteq\mathbb{N}$ such that $$\begin{aligned} p(x_{2n+1},Sx) \leq&H_{p}(Tx_{2n},Sx) \\ \leq& g\bigl(p(x_{2n},x),p(x,Sx),p(x_{2n},Tx_{2n}),p(x_{2n},Sx),p(x,Tx_{2n}) \bigr), \end{aligned}$$ for all $n\in I$. Case (ii). There exists an infinite subset $J\subseteq\mathbb{N}$ such that $$\begin{aligned} p(x_{2n+2},Tx) \leq&H_{p}(Sx_{2n+1},Tx) \\ \leq&g\bigl(p(x_{2n+1},x),p(x,Tx),p(x_{2n+1},Sx_{2n+1}),p(x_{2n+1},Tx),p(x,Sx_{2n+1}) \bigr), \end{aligned}$$ for all $n\in J$. In Case (i), we get $$\begin{aligned} p(x,Sx) \leq& p(x,x_{2n+1})+p(x_{2n+1},Sx) \\ \leq& p(x, x_{2n+1})+ g\bigl(p(x_{2n},x),p(x,Sx),p(x_{2n},Tx_{2n}),p(x_{2n},Sx),p(x,Tx_{2n}) \bigr) \\ \leq& p(x, x_{2n+1}) \\ &{}+g\bigl(p(x_{2n},x),p(x,Sx),p(x_{2n},x_{2n+1}),p(x_{2n},x)+p(x,Sx)-p(x,x),p(x,x_{2n+1}) \bigr), \end{aligned}$$ for all $n\in I$. Continuity of g implies $$ p(x,Sx)\leq g\bigl(0,p(x,Sx),0,0+p(x,Sx)-0,0\bigr). 
$$ Now by using Propositions 1.6 and 1.7, we have $p(x,Sx)=0$, and thus $x\in Sx$. In Case (ii), we have $$\begin{aligned} p(x,Tx) \leq&p(x,x_{2n+2})+p(x_{2n+2},Tx) \\ \leq& p(x,x_{2n+2})+g\bigl(p(x_{2n+1},x),p(x,Tx),p(x_{2n+1},Sx_{2n+1}),p(x_{2n+1},Tx),p(x,Sx_{2n+1}) \bigr) \\ \leq& p(x,x_{2n+2}) \\ &{}+ g\bigl(p(x_{2n+1},x),p(x,Tx),p(x_{2n+1},x_{2n+2}), \\ &p(x_{2n+1},x)+p(x,Tx)-p(x,x),p(x,x_{2n+2}) \bigr), \end{aligned}$$ for all $n\in J$. Since g is continuous, we obtain $$ p(x,Tx)\leq g\bigl(0,p(x,Tx),0,0+p(x,Tx)-0,0\bigr). $$ Again, by using Propositions 1.6 and 1.7, we have $p(x,Tx)=0$, which gives $x\in Tx$. This completes the proof. □ The following results are consequences of Theorem 2.3. Let $(X,p)$ be a complete partial metric space and $T:X\rightarrow C(X)$ be a multifunction. Suppose that there exist $\alpha\in(0,1)$ and $g\in{R}$ with $h=g(1,1,1,2,0)$ such that $\alpha (h+1)\leq1$ and $\alpha p(x,Tx)\leq p(x,y)$ implies $$ H_{p} (Tx,Ty)\leq g\bigl(p(x,y),p(y,Ty),p(x,Tx),p(x,Ty),p(y,Tx) \bigr), $$ Theorem 1.3 introduced in [20] is a special case of Theorem 2.4. Define $g\in{R_{1}}$ by $g(x_{1},x_{2},x_{3},x_{4},x_{5})=kx_{1}$. □ Now we provide the partial metric versions of Theorems 1.4 and 1.5. Let $(X, p)$ be a complete partial metric space and $T : X \rightarrow\mathit{CB}(X)$ be a multifunction. Assume that there exists $r \in[0, 1) $ such that $$ \frac{1}{1+r} p(x, Tx) \leq p(x, y)\quad\textit{implies} \quad H_{p}(Tx, Ty) \leq rp(x, y), $$ Define $g\in{R_{1}}$ by $g(x_{1},x_{2},x_{3},x_{4},x_{5})=rx_{1}$. Let $\alpha =\frac{1}{1+r}$. Since $h=r$ and $\alpha(1+h)\leq1$, by using Theorem 2.3, T has a fixed point. □ Let $(X,p)$ be a complete partial metric space and $T : X \rightarrow C(X)$ be a multifunction. Assume that there exist $a, b, c\in[0,1)$ such that $a+ b+c<1 $ and $$ \begin{aligned} &\frac{(1 - b - c)}{1+a}p(x, Tx)\leq p(x,y)\quad\textit {implies} \\ &H_{p}(Tx, Ty) \leq ap(x,y)+bp(x,Tx)+cp(y, Ty). \end{aligned} $$ Define $g \in{ R_{1}}$ by $g(x_{1}, x_{2}, x_{3}, x_{4}, x_{5}) = ax_{1} + cx_{2} + bx_{3}$. Let $\alpha=\frac{1-b-c}{1+a}$. Since $h = a + b + c$ and $\alpha(1 + h) \leq1$, by Theorem 2.3, T has a fixed point. □ Matthews, SG: Partial metric topology. Research Report 212, Dept. of Computer Science, University of Warwick (1992) Matthews, SG: Partial metric topology. In: Proc. 8th Summer Conference on General Topology and Applications. Annals of the New York Academy of Sciences, vol. 728, pp. 183-197 (1994) Abdeljawad, T, Karapınar, E, Tas, K: Existence and uniqueness of a common fixed point on partial metric spaces. Appl. Math. Lett. 24, 1900-1904 (2011) Altun, I, Sola, F, Şimşek, H: Generalized contractions on partial metric spaces. Topol. Appl. 157, 2778-2785 (2010) Karapınar, E: Some fixed point theorems on the class of comparable partial metric spaces. Appl. Gen. Topol. 12(2), 187-192 (2011) Aleomraninejad, SMA, Rezapour, S, Shahzad, N: On fixed point generalizations of Suzuki's method. Appl. Math. Lett. 24, 1037-1040 (2011) Oltra, S, Valero, O: Banach's fixed point theorem for partial metric spaces. Rend. Ist. Mat. Univ. Trieste 36, 17-26 (2004) Valero, O: On Banach fixed point theorems for partial metric spaces. Appl. Gen. Topol. 6, 229-240 (2005) Altun, I, Erduran, A: Fixed point theorems for monotone mappings on partial metric spaces. Fixed Point Theory Appl. 2011, Article ID 508730 (2011) Romaguera, S: A Kirk type characterization of completeness for partial metric spaces. Fixed Point Theory Appl. 
2010, Article ID 493298 (2010) Aydi, H, Karapınar, E: A Meir-Keeler common type fixed point theorem on partial metric spaces. Fixed Point Theory Appl. 2012, Article ID 26 (2012) Karapınar, E: Generalization of Caristi-Kirk's theorems on partial metric spaces. Fixed Point Theory Appl. 2011, Article ID 4 (2011) Karapınar, E, Erhan, IM, Yıldız, UA: Fixed point theorem for cyclic maps on partial metric spaces. Appl. Math. Inf. Sci. 6, 239-244 (2012) Karapınar, E, Chi, KP, Thanh, TD: A generalization of Ćirić quasicontractions. Abstr. Appl. Anal. 2012, Article ID 518734 (2012) Romaguera, S, Schellekens, M: Weightable quasi-metric semigroup and semilattices. Electron. Notes Theor. Comput. Sci. 40, 347-358 (2001) Kopperman, R, Matthews, SG, Pajoohesh, H: What do partial metric represent? In: Spatial Representation: Discrete vs. Continuous Computational Models. Dagstuhl Seminar Proceedings, vol. 4351. Internationales Begegnungs und Forschungszentrum fur Informatik (IBFI), Schloss Dagstuhl, Wadern (2005) Kunzi, HPA, Pajoohesh, H, Schellekens, MP: Partial quasi-metrics. Theor. Comput. Sci. 365(3), 237-246 (2006) Schellekens, M: A characterization of partial metrizability: domains are quantifiable. Theor. Comput. Sci. 305(1-3), 409-432 (2003) Haghi, RH, Rezapour, S, Shahzad, N: Be careful on partial metric fixed point results. Topol. Appl. 160, 450-454 (2013) Aydi, H, Abbas, M, Vetro, C: Partial Hausdorff metric and Nadler's fixed point theorem on partial metric spaces. Topol. Appl. 159, 3234-3242 (2012) Choudhury, BS, Konar, P, Rhoades, BE, Metiya, N: Fixed point theorems for generalized weakly contractive mappings. Nonlinear Anal. 74, 2116-2126 (2011) Dhompongsa, S, Yingtaweesittikul, H: Fixed point for multivalued mappings and the metric completeness. Fixed Point Theory Appl. 2009, Article ID 972395 (2009) The authors would like to thank referees for their useful comments and suggestions for the improvement of the paper. The third author gratefully acknowledges the support from the Deanship of Scientific Research (DSR) at King Abdulaziz University (KAU) in Jeddah, Kingdom of Saudi Arabia during this research. Department of Mathematics, Qom University of Technology, Qom, Iran S Mohammad Ali Aleomraninejad Department of Mathematics, Atilim University, İncek, 06836, Ankara Inci M Erhan Marwan A Kutbi Department of Mathematics, Alzahra University, Tehran, Iran Masoumeh Shokouhnia Search for S Mohammad Ali Aleomraninejad in: Search for Inci M Erhan in: Search for Marwan A Kutbi in: Search for Masoumeh Shokouhnia in: Correspondence to Inci M Erhan. All authors contributed equally to this work. All authors read and approved the final manuscript. 46T99 partial metric space
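As an end-of-paper illustration (added here; all concrete choices are assumptions, not taken from the article), the alternating iteration used in the proof of Theorem 2.3 can be run on $X=[0,\infty)$ with the partial metric $p(x,y)=\max\{x,y\}$ from the introduction, taking the singleton-valued multifunctions $Tx=Sx=\{x/2\}$ and $g(x_1,\ldots,x_5)=0.6\,x_1\in R_1$, so that $h=0.6$ and $\alpha=0.5$ satisfies $\alpha(h+1)\leq1$. The contractive condition holds because $H_p(Tx,Sy)=\max\{x,y\}/2\leq0.6\,p(x,y)$, and the iterates contract geometrically towards the fixed point 0.

```python
# Illustrative run (assumed setup, not from the article) of the iteration in the
# proof of Theorem 2.3: X = [0, inf), p(x, y) = max{x, y}, Tx = Sx = {x/2},
# g(x1, ..., x5) = 0.6*x1 in R_1, h = g(1,1,1,2,1) = 0.6, r chosen in (h, 1).
def p(x, y):
    return max(x, y)

def g(x1, x2, x3, x4, x5):
    return 0.6 * x1

T = S = lambda x: x / 2.0          # singleton-valued multifunctions
H_p = p                            # partial Hausdorff distance of two singletons

x, r = 5.0, 0.6                    # arbitrary starting point x_0 and r in (h, 1)
bound = None
for n in range(25):
    nxt = T(x) if n % 2 == 0 else S(x)     # x_1 in T x_0, x_2 in S x_1, ...
    # contractive condition of Theorem 2.3 (here it holds for every pair x, y):
    assert H_p(T(x), S(nxt)) <= g(p(x, nxt), p(nxt, S(nxt)), p(x, T(x)),
                                  p(x, S(nxt)), p(nxt, T(x))) + 1e-12
    bound = p(x, nxt) if bound is None else bound * r
    assert p(x, nxt) <= bound + 1e-12      # p(x_n, x_{n+1}) <= r^n p(x_0, x_1)
    x = nxt
print("iterates approach the fixed point 0; x_25 =", x)
```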
CommonCrawl
Proof that (1/2,1/2) Lorentz group representation is a 4-vector Taken from Quantum Field Theory in a Nutshell by Zee, problem II.3.1: Show by explicit computation that $(\frac{1}{2},\frac{1}{2})$ is indeed the Lorentz vector. This has been asked here: How do I construct the $SU(2)$ representation of the Lorentz Group using $SU(2)\times SU(2)\sim SO(3,1)$ ? but I can't really digest the formality of this answer with only a little knowledge of groups and representations. By playing around with the Lorentz group generators it is possible to find the basis $J_{\pm i}$ that separately have the Lie algebra of $SU(2)$, and thus can be separately given spin representations. My approach has been to write $$J_{+i}=\frac{1}{2}(J_{i}+iK_{i})=\frac{1}{2}\sigma_{i}$$ $$J_{-i}=\frac{1}{2}(J_{i}-iK_{i})=\frac{1}{2}\sigma_{i}$$ which implies that $$J_{i}=\sigma_{i}$$ $$K_{i}=0$$ However I don't really get where to go next. group-theory group-representations representation-theory lie-algebra lorentz-symmetry WatwWatw $\begingroup$ Related question by OP: physics.stackexchange.com/q/321276/2451 and links therein. $\endgroup$ – Qmechanic♦ Mar 26 '17 at 10:53 $\begingroup$ The isomorphism can be seen by realizing that the matrices $\sigma^\mu_{\alpha {\dot \beta}}$ forms a basis for all $2\times2$ matrices so an arbitrary matrix $A_{\alpha {\dot \beta}}$ can be written as $A_{\alpha {\dot \beta}} = A_\mu \sigma^\mu_{\alpha {\dot \beta}}$. The LHS of this transforms in the $(0,\frac{1}{2}) \otimes (\frac{1}{2} ,0) = ( \frac{1}{2} , \frac{1}{2})$. This equation tells you what is the change of the basis between the usual vector basis $A_\mu$ and the $(\frac{1}{2},\frac{1}{2})$ basis. $\endgroup$ – Prahar Sep 19 '17 at 2:51 First let's recall how to construct the finite dimensional irreducible representations of the Lorentz group. Say $J_i$ are the three rotation generators and $K_i$ are the three boost generators. \begin{align*} L_x = &\begin{pmatrix} 0&0&0&0 \\ 0&0&0&0 \\ 0&0&0&-1 \\ 0&0&1&0 \end{pmatrix}& L_y = &\begin{pmatrix} 0&0&0&0 \\ 0&0&0&1 \\ 0&0&0&0 \\ 0&-1&0&0 \end{pmatrix}& L_z = &\begin{pmatrix} 0&0&0&0 \\ 0&0&-1&0 \\ 0&1&0&0 \\ 0&0&0&0 \end{pmatrix}\\ K_x = &\begin{pmatrix} 0&1&0&0 \\ 1&0&0&0 \\ 0&0&0&0 \\ 0&0&0&0 \end{pmatrix}& K_y = &\begin{pmatrix} 0&0&1&0 \\ 0&0&0&0 \\ 1&0&0&0 \\ 0&0&0&0 \end{pmatrix}& K_z = &\begin{pmatrix} 0&0&0&1 \\ 0&0&0&0 \\ 0&0&0&0 \\ 1&0&0&0 \end{pmatrix}\\ \end{align*} They satisfy $$ [J_i, J_j] = \varepsilon_{ijk} J_k \hspace{1 cm} [K_i, K_j] = -\varepsilon_{ijk} J_k \hspace{1 cm} [J_i, K_j] = \varepsilon_{ijk}K_k. $$ (Note that I am using the skew-adjoint convention for Lie algebra elements where I did not multiply by $i$.) We then define $$ A_i = \frac{1}{2} (J_i - i K_i) \hspace{2 cm} B_i = \frac{1}{2}(J_i + i K_i) $$ which satisfy the commutation relations $$ [A_i, A_j] = \varepsilon_{ijk} A_k \hspace{2cm} [B_i, B_j] = \varepsilon_{ijk} B_k \hspace{2cm} [A_i, B_j] = 0. $$ Here is how you construct the representation of the Lorentz group: first, choose two non-negative half integers $j_1$ and $j_2$. These correspond to two spin $j$ representations of $\mathfrak{su}(2)$, which I will label $$ \pi'_{j}. $$ Recall that that $$ \mathfrak{su}(2) = \mathrm{span}_\mathbb{R} \{ -\tfrac{i}{2} \sigma_x, -\tfrac{i}{2} \sigma_y, -\tfrac{i}{2} \sigma_z \} $$ where $$ [-\tfrac{i}{2} \sigma_i, -\tfrac{i}{2} \sigma_j] = -\tfrac{i}{2}\varepsilon_{ijk} \sigma_k. 
$$ For this question, we only need to know the spin $1/2$ representation of $\mathfrak{su}(2)$, which is given by $$ \pi'_{\tfrac{1}{2}}( -\tfrac{i}{2}\sigma_i) = -\tfrac{i}{2} \sigma_i. $$ So okay, how do we construct the $(j_1, j_2)$ representation of the Lorentz group? Any Lie algebra element $X \in \mathfrak{so}(1,3)$ can written as a linear combination of $A_i$ and $B_i$: $$ X = \sum_{i = 1}^3 (\alpha_i A_i + \beta_i B_i). $$ (Note that we are actually dealing with the complexified version of the Lie algebra $\mathfrak{so}(1,3)$ because our definitions of $A_i$ and $B_i$ have factors of $i$, so $\alpha, \beta \in \mathbb{C}$.) $A_i$ and $B_i$ form their own independent $\mathfrak{su}(2)$ algebras. The Lie algebra representation $\pi'_{(j_1, j_2)}$ is then given by \begin{align*} \pi'_{(j_1, j_2)}(X) &= \pi'_{(j_1, j_2)}(\alpha_i A_i + \beta_j B_i) \\ &\equiv \pi'_{j_1}(\alpha_i A_i) \otimes \big( \pi'_{j_2}(\beta_j B_i) \big)^* \end{align*} where the star denotes complex conjugation. Sometimes people forget to mention that you have to include the complex conjugation, but it won't work otherwise! If $j_1 = 1/2$ and $j_2 = 1/2$, we have \begin{equation*} \pi_{\frac{1}{2}}'(A_i) = -\frac{i}{2}\sigma_i \otimes I \hspace{2cm} \big(\pi_{\frac{1}{2}}' (B_i)\big)^* = \frac{i}{2} I \otimes\sigma_i^*. \end{equation*} We can explicitly write out these tensor products in terms of a $2 \times 2 = 4$ dimensional basis. (Here I am using the so-called "Kronecker Product" to do this. That just a fancy name for multiplying all the elements of two $2\times 2$ cell-wise to get a $4 \times 4$ matrix.) \begin{align*} \pi_{(\frac{1}{2},\frac{1}{2})}'(A_x) &= -\frac{i}{2}\begin{pmatrix} 0&0&1&0 \\ 0&0&0&1 \\ 1&0&0&0 \\ 0&1&0&0 \end{pmatrix} & \big(\pi_{(\frac{1}{2},\frac{1}{2})}'(B_x)\big)^* &= \frac{i}{2}\begin{pmatrix} 0&1&0&0 \\ 1&0&0&0 \\ 0&0&0&1 \\ 0&0&1&0 \end{pmatrix} \\ \pi_{(\frac{1}{2},\frac{1}{2})}'(A_y) &= \frac{1}{2}\begin{pmatrix} 0&0&-1&0 \\ 0&0&0&-1 \\ 1&0&0&0 \\ 0&1&0&0 \end{pmatrix} & \big(\pi_{(\frac{1}{2},\frac{1}{2})}'(B_y)\big)^*&= \frac{1}{2}\begin{pmatrix} 0&-1&0&0 \\ 1&0&0&0 \\ 0&0&0&-1 \\ 0&0&1&0 \end{pmatrix} \\ \pi_{(\frac{1}{2},\frac{1}{2})}'(A_z) &= -\frac{i}{2}\begin{pmatrix} 1&0&0&0 \\ 0&1&0&0 \\ 0&0&-1&0 \\ 0&0&0&-1 \end{pmatrix} & \big(\pi_{(\frac{1}{2},\frac{1}{2})}'(B_z)\big)^* &= \frac{i}{2}\begin{pmatrix} 1&0&0&0 \\ 0&-1&0&0 \\ 0&0&1&0 \\ 0&0&0&-1 \end{pmatrix} \end{align*} We can then write out the matrices of the rotations and boosts $J_i$ and $K_i$ using $$ J_i = A_i + B_i \hspace{2cm} K_i = i(A_i - B_i). $$ \begin{align*} \pi'_{(\frac{1}{2},\frac{1}{2})}(J_x) &= \frac{i}{2}\begin{pmatrix} 0&1&-1&0 \\ 1&0&0&-1 \\ -1&0&0&1 \\ 0&-1&1&0 \end{pmatrix} & \pi'_{(\frac{1}{2},\frac{1}{2})}(K_x) &= \frac{1}{2}\begin{pmatrix} 0&1&1&0 \\ 1&0&0&1 \\ 1&0&0&1 \\ 0&1&1&0 \end{pmatrix} \\ \pi'_{(\frac{1}{2},\frac{1}{2})}(J_y) &= \frac{1}{2}\begin{pmatrix} 0&-1&-1&0 \\ 1&0&0&-1 \\ 1&0&0&-1 \\ 0&1&1&0 \end{pmatrix} & \pi'_{(\frac{1}{2},\frac{1}{2})}(K_y) &= \frac{i}{2}\begin{pmatrix} 0&1&-1&0 \\ -1&0&0&-1 \\ 1&0&0&1 \\ 0&1&-1&0 \end{pmatrix} \\ \pi'_{(\frac{1}{2},\frac{1}{2})}(J_z) &= \begin{pmatrix} 0&0&0&0 \\ 0&-i&0&0 \\ 0&0&i&0 \\ 0&0&0&0 \end{pmatrix} & \pi'_{(\frac{1}{2},\frac{1}{2})}(K_z) &= \begin{pmatrix} 1&0&0&0 \\ 0&0&0&0 \\ 0&0&0&0 \\ 0&0&0&-1 \end{pmatrix} \\ \end{align*} These are strange matrices, although we can make them look much more suggestive in another basis. 
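Before performing that change of basis, one can check numerically that the 4 x 4 matrices constructed above really do close into the Lorentz algebra in the skew-adjoint convention used in this answer, $[J_i,J_j]=\varepsilon_{ijk}J_k$, $[K_i,K_j]=-\varepsilon_{ijk}J_k$, $[J_i,K_j]=\varepsilon_{ijk}K_k$. The following NumPy snippet is an added sanity check, not part of the original answer.

```python
# Numerical check (added sketch) that the Kronecker-product generators of the
# (1/2,1/2) representation satisfy the so(1,3) commutation relations in the
# skew-adjoint convention used above.
import numpy as np

I2 = np.eye(2)
sig = [np.array([[0, 1], [1, 0]], complex),
       np.array([[0, -1j], [1j, 0]], complex),
       np.array([[1, 0], [0, -1]], complex)]

A = [-0.5j * np.kron(s, I2) for s in sig]          # pi'(A_i) = -(i/2) sigma_i (x) I
B = [+0.5j * np.kron(I2, s.conj()) for s in sig]   # (pi'(B_i))* = (i/2) I (x) sigma_i*
J = [a + b for a, b in zip(A, B)]                  # J_i = A_i + B_i
K = [1j * (a - b) for a, b in zip(A, B)]           # K_i = i (A_i - B_i)

def comm(x, y):
    return x @ y - y @ x

eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k], eps[j, i, k] = 1.0, -1.0

for i in range(3):
    for j in range(3):
        JJ = sum(eps[i, j, k] * J[k] for k in range(3))
        KK = sum(eps[i, j, k] * K[k] for k in range(3))
        assert np.allclose(comm(J[i], J[j]), JJ)       # [J_i, J_j] =  eps_ijk J_k
        assert np.allclose(comm(K[i], K[j]), -JJ)      # [K_i, K_j] = -eps_ijk J_k
        assert np.allclose(comm(J[i], K[j]), KK)       # [J_i, K_j] =  eps_ijk K_k
print("the (1/2,1/2) generators satisfy the so(1,3) commutation relations")
```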
Define the matrix \begin{equation*} U = \frac{1}{\sqrt{2}} \begin{pmatrix} 1 & 0 & 0 & 1 \\ 0 & 1 & i & 0\\ 0 & 1 & -i & 0 \\ 1 & 0 & 0 &-1 \end{pmatrix}. \end{equation*} Amazingly, \begin{equation*} U \big( \pi'_{(\frac{1}{2},\frac{1}{2})}(L_i) \big) U^{-1} = L_i \hspace{1cm}U \big( \pi'_{(\frac{1}{2},\frac{1}{2})}(K_i) \big) U^{-1} = K_i. \end{equation*} Therefore, the $(\tfrac{1}{2}, \tfrac{1}{2})$ representation is equivalent to the regular "vector" representation of $SO^+(1,3)$. However, these "vectors" live in $\mathbb{C}^4$, not $\mathbb{R}^4$, which people usually don't mention. Sorry this reply has come so long after your post. As this is a homework-type problem I won't perform all the calculations, but will spell out all the key points; it's probably simpler to follow this way anyway if you fill in the matrix algebra yourself! To understand what's going on here, first of all think about a spin-1 system. In that case the action of $J_3$ on vectors is represented by the matrix $$\left(\begin{array}{ccc} 0 & -i & 0 \\ i & 0 & 0 \\ 0 & 0 & 0 \end{array} \right)$$ If you follow the standard construction of the spin-1 representation using ladder operators found in any QM textbook, you find instead that $J_3$ is represented by the matrix $$\left(\begin{array}{ccc} 1 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & -1 \end{array} \right)$$ because it's acting on a basis of $J_3$ eigenstates. You can see that these matrices are unitarily equivalent by finding the eigenvalues of the first one to be $-1,0,1$; working out the corresponding eigenvectors gives you the matrix that changes basis (up to some ordering of that basis). (It's nice to notice that this matrix transforms linearly polarised transverse vector fields into circularly polarised ones!) Now, in the given case where we have $(1/2,1/2) = SU(2) \otimes SU(2)$. $J_3$ acts on both the (1/2,0) and (0,1/2) representations by $\frac{1}{2} \sigma_3$. Because it's a generator of the group, and not a group element, the action on the tensor product of the two groups is given by $$J_3 = \frac{1}{2} \sigma_3 \otimes I + I \otimes \frac{1}{2} \sigma_3 = \left(\begin{array}{cccc} 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & -1 \end{array} \right)$$ It's clear that this matrix gives us the action of $J_3$ on its eigenstates, and a change of basis gives us the 4-vector representation. The difference is that we now have two 0 eigenvalues; these have resulted from "adding the angular momentum" of our two spin-1/2 representations. One is from the spin-1 rep as before. The other is from the spin-0 representation; this is your time direction in the 4-vector, which is a scalar under rotations. So in this case the components of the matrix that changes basis acts on the 0-eigenvalue directions are the Clebsch-Gordon coefficients. rwoldrwold You have misunderstood the strategy. The idea is that we define new operators $A_i = \frac{J_i + iK_i}{2}$ and $B_i =\frac{J_i-iK_i}{2} $. Now compute $[A_i,A_j], [B_i,B_j] $ and $[A_i,B_j]$ The answer you should get is that the first two should give you the lie algebra of $SU(2)$ while the second should give you zero. This means we have at least locally $SU(2)\times SU(2)$ The lorentz boosts for a two component spinor is then $e^{i\vec{\sigma }\cdot \theta + \vec{\sigma} \cdot \phi}$ and the other is $e^{i\vec{\sigma }\cdot \theta - \vec{\sigma} \cdot \phi}$. 
Direct sum of this does the general job i.e $\begin{pmatrix}e^{i\vec{\sigma }\cdot \theta + \vec{\sigma} \cdot \phi}& 0 \\ 0 &e^{i\vec{\sigma }\cdot \theta - \vec{\sigma} \cdot \phi} \end{pmatrix}$ So from this you should be able to get the generators you want Amara $\begingroup$ This is something that has been proven in the book. I think I need to explicitly calculate the generators of the Lorentz group by taking the tensor product of the two $SU(2)$ representations. How do I do this? $\endgroup$ – Watw Mar 26 '17 at 9:42 $\begingroup$ I have added what the lorentz boosts look like so it should be clear what the generators are $\endgroup$ – Amara Mar 26 '17 at 13:00 $\begingroup$ I believe you've actually misunderstood the question. Your answers relate to the Dirac spinor representation that's the direct sum (1/2, 0) +(0,1/2). The OP is asking for the equivalence of the (1/2, 1/2) representation to the fundamental 4-vector irrep. $\endgroup$ – rwold Apr 26 '17 at 11:43
CommonCrawl
The profit function of a firm for a given level of output 'x' is estimated by nonlinear regression to be: {eq}P(x) = -(x - 375)^{2}+1,200 {/eq}. What is the production level at which profit is maximized? What is the maximum profit? Profit Function A profit function shows how much profit is made (P) for various levels of production where the quantity produced is (x). The production level is 375 and the corresponding profits are $1,200. To maximize a function, in this case our profit function P, we need to use the first-order condition: setting P'(x) = -2(x - 375) = 0 gives x = 375, and since the parabola opens downward this critical point is the maximum, with maximum profit P(375) = 1,200. If a firm has a production function f(k,l) = 3k^{0.3}l^{0.5} where r = 8, w = 9, and the price of output = 17, What is profit maximizing level of capital? For this question both the level of capital If output is only a function of labor, then: a. the slope of the production function measures the average product of labor. b. a firm cannot be profitable in the long run. c. the slope of a ray from the origin to a point on the production function measur A firm has the production function f(x, y) = x^1.40y^1. This firm has: a. decreasing returns to scale and diminishing marginal products for factor x. b. increasing returns to scale and decreasing marginal product of factor x. c. decreasing returns to scal A firm whose production function displays decreasing returns to scale will have an average cost curve that is: a. a curve with a positive slope b. a curve with a zero slope c. a curve with a negative slope d. all of the above. A firm's production is given by: q = 5L^{2/3} K^{1/3} (a) Calculate APL and MPL. Determine if the production function exhibits the law of diminishing marginal returns. Calculate the output (production) elasticity with respect to labor. (b) Calculate MRTS. Given the Production Function of a perfectly competitive firm, Q=120L+9L^2-0.5L^3, where Q=Output and L=labor input a. At what value of L will Diminishing Returns take effect? b. Calculate the range of values for labor over which stages I, II, and III occ Let the production function of a firm be given by Find the equation of the isoquant associated with a production level of 10 units of output. The profit functions for two products are given by Z1 and Z2, respectively: Z1 = Y^2 + 17Y - 42 and Z2 = Y^2 + 16Y - 38. a. At what level of output, Y, will be the first produc If the slope of a long-run total cost function decreases as output increases, the firm's underlying production function exhibits: a. Constant returns to scale. b. Decreasing returns to scale. c. Decreasing returns to a factor input. d. Increasing returns What is the efficient output level to production synonymous with a profit maximizing level to output? Does the slope of the production function measure the average product of labour or the marginal product of a worker? For each production function below, determine: (1) Whether there are diminishing marginal returns to labor in the short run, and (2) the returns to scale. 1. f(L, K) = 2L + 3K 2. f(L, K) = 4LK 3. f(L, K) = min{L, K} A firm has the production function: q = 10L^{0.5}K^{0.5}, the price of labor is w = 10 and the price of capital is r = 20 a) demonstrate that this function has constant returns to scale.
b) derive the short-run marginal and average variable cost functio When a firm produces a level of output on the production function: a. Marginal physical product is zero b. Maximum efficiency is achieved c. Opportunity cost for resources is at a maximum d. Profits are maximized A firm is producing 100 units of its product. At this level of output the AVC=$60, and the ATC=$80. The firm is a price taker and the price for its product is $100. Assuming that the firm is maximizing profits and that labor is the only variable input. Fr For a firm, the relationship between the quantity of inputs and quantity of output is called the: a. Profit function, b. Production function, c. Total-cost function, d. Quantity function. If the slope of the total cost curve increases as output increases, the production function is exhibiting: a. increasing returns to scale b. constant returns to scale c. decreasing returns to scale d. decreasing returns to a factor input Four profit-maximizing oligopolists produce electricity. They set their productions levels, y1, y2, y3 and y4, simultaneously. Each firm's cost function is given by C(y) =4y. The price of unit electri A production function establishes the relationship between: A. the quantity of inputs used and the quantity of output produced. B. the market price of a good and the sales revenue generated. C. the quantity of output produced and the firm's profit. D. the When a profit-maximizing firm undertakes production in more than one plant, it will allocate a fixed level of inputs such that _____ is equal across plants. A. Answers 'the marginal product' and 'the average product' are both correct. B. the average produ 1) Production Functions and Marginal Products: How do inputs turn into outputs? 2) Iso-quants/-costs and cost-minimization: What?s the best way to produce a given level of output? 3) Cost Curves and The difference in total surplus between a socially efficient level of production and a monopolist's level of production is: a. Offset by regulatory revenues, b. Called a deadweight loss, c. Usually small and insignificant, d. All of the above are correct. Given that the production function relating output to capital becomes flatter moving from the left to the right means that: A) The marginal product of capital is positive. B) There is diminishing marginal productivity of labor. C) There is diminishing mar A production function shows a firm how to: a. maximize profit b. maximize output c. minimize losses d. minimize output A firm is currently producing at a level where it's MC = 10 and its MR = 5. We can conclude that this firm is: a. Profit-maximizing, b. Under-producing, c. Over-producing, d. No definite conclusion can be made about the firm's level of production. Consider a profit-maximising, price-taking firm that only uses capital as input at a cost of v per unit. Its production function is given by f(k) = 11 - \frac{1}{1 + k}. What is its cost function (whe A firm has two variable factors and a production function f(L, K) = L1/4K1/2. The price of L is 3/4 and the price of factor K is 6. The price of its output is 12. What is the maximum profit the firm can make? a. 9 b. 12 c. 18 d. 73 e. None of the above If a firm has a production function f(k,l) = 2k^{(0.3)}l^{(0.6)} where r = 6, w = 4, and the price of output = 7, What the maximum profit the firm can earn? The production function for a firm is given by q = L^{.75} K^{.25} where q denotes output; Land K labor and capital inputs. (a) Determine marginal product of labor. 
Show whether or not the above production function exhibits diminishing marginal produ A firm has the production function q = f (L, K) = L + K2 This firm has: a) Decreasing returns to scale. b) Increasing returns to scale. c) Constant returns to scale. d) Increasing marginal product. e) None of the above. A production function may exhibit _____. a. constant returns to scale and diminishing marginal productivities. b. increasing returns to scale and diminishing marginal productivities. c. decreasing returns to scale and diminishing marginal productivities. Suppose a firm operates two production facilities within the U.S. Marginal costs of production for each plant are given as follows: MC_1=2q, MC_2=q_2. The firm's marginal revenue is given by MR=800-2q. Calculate the profit-maximization production distribu The production function is f(x_1, x_2) = x_1^{1/2}x_2^{1/2}. If the price of factor 1 is $4 and the price of factor 2 is $6, in what proportions should the firm use factors 1 and 2 if it wants to maximize profits? The production function: a. is an economic relationship between revenue and cost. b. always shows increasing marginal product of labor. c. shows the relationship between input prices and amount of input used. d. shows the maximum level of output for a giv A production function establishes the relationship between: a) the market price of a good and the sales revenue generated, b) the quantity of output produced and the firm's profit, c) the quantity of inputs used and the quantity of output produced, d) the a firm has a given production function: F(K,L)= K^1/2 L^1/3 a) set up firm's one step profit maximization in termsof output problem. b)? Check to see if the Inada conditions hold for the production. c Suppose a firm has a three-stage production function. Suppose the firm is using 20 units of labor. At this level of input, the marginal product of labor is 50 and the average product is 30. (i) What Consider the Production Function, Y = 25K1/3L2/3 (a) Calculate the marginal product of labor and capital (b) Does this production function exhibit constant/increasing/decreasing returns to scale? ( A firm has a total cost function of: TC(Q) = 100 + 100Q + Q^3/100, where 'Q' is the firm's output level. A) Find the function that gives the average cost of production. B) What output level minimizes A competitive firm's production function is f(x1, x2) = 12x1/21 + 4x1/22. The price of factor 1 is $1 and the price of factor 2 is $2. The price of output is $4. What is the profit-maximizing quantity of output? a. 304 b. 608 c. 300 d. 612 e. 292 A firm is producing 50 units of output and receiving $1.50 for each unit. At this output, marginal revenue is $l.25, average cost per unit is $l.20, and marginal cost is $l.40. From these revenue and A firm has the production function f(k, l) = 2k sqrt l. Let the price of capital be r = 1, the price labor be w = 2, and the price of output be p. Find the marginal products of capital and labor. Does the firm have constant returns to scale? Consider a firm, that has production function, f(L,K)=3L^2/3K^1/3. Does this production function satisfy the law of decreasing marginal returns of capital? The production function is a mathematical function that shows A. the relationship between output and the factors of production. B. how various inputs are produced. C. the most cost-efficient means of producing output. D. the most efficient level of output A firm's long-run average cost curve is estimated by the equation: LAC=1,000-2.5Q+0.005Q^2. 
What is the minimum efficient scale of production? A company has the following cost function: C(q) = 4q3 - 200q2 + 500q + 50,000. a. What level of output will minimize the average variable cost? b. Does the production process indicate diminishing marginal product? How can you tell? If output is produced with two factors of production and with increasing returns to scale, a) there cannot be a diminishing marginal rate of substitution. b) all inputs must have increasing marginal products. c) on a graph of production isoquants, moving A firm's production function is the relationship between: A. the inputs employed by the firm and the resulting costs of production. B. the firm's production costs and the amount of revenue it receives from the sale of its output. C. the factors of produ If a firm wants to maximize profits it should: a. Hire each factor of production up to the point at which the marginal physical product per the last dollar spent is equalized, b. Hire each factor of production up to the point at which the marginal facto The marginal product of labor in manufacturing slopes downward because of: A. diseconomies to scale B. discontinuities in the production function C. diminishing returns D. gross substitution with the food sector E. None of the above. Assume that the output per worker production function is: y_t = 2k_t^{0.5}. The saving and depreciation rates are estimated at 0.4 and 0.03, respectively. A. Calculate the steady-state capital-labor ratio for this economy. B. Calculate the steady-state Using the linear approximation system to estimate the profit maximizing price requires that managers have information on the cost of production: a. and the nature of the production function. b. and th Given the following table of production quantities, with their corresponding marginal revenue and marginal cost, estimate the production level that maximizes profit. In a two-input model, if marginal product is increasing for one input, does the production process necessarily have to increase returns to scale? Could it have decreasing returns to scale? The optimal amount of capital and labor used by a firm in the production process is usually obtained by the technique of cost minimization, rather than maximizing the level of output produced. a) Exp Suppose we are given a profit function Q = 12L^.5K^.5. The price of labor is $6 per unit and the price of capital (K) is $7 per unit. The firm is interested in the optimal mix of inputs to minimize the cost of producing any level of output Q. In the optim The optimal level of production for any company is the level of production that either maximizes profits or minimizes losses. How does one determine the optimal level of production for any business? Explain. A firm produces an output with a fixed proportion production given by: f(x_1, x_2) = min(x_1, x_2). Assume that w_1 = 2 and w_2 = 4. A) Find an expression for the firm's total cost function in the lon A firm's production function is given as Q=10L^{1/2}K^{1/2} where L and K are labour and capital. Firm's iso-cost function is C = wL + rK. (a) Using this production function, express the amount of labour employed by the firm as a function of the level of Consider a profit-maximizing firm with the following production function: f(x_1,x_2) = x_1 x_2. The prices of the two inputs are equal to w_1 = 4 and w_2 = 2, respectively. a) What are the returns-to This firm doesn't use capital (K). They only use labor (L). Suppose the firm's production function is Y = L^(x). 
Furthermore, r = rental rate of capital and w = wage. Find the profit function and solv A firm has production function Y = KL + L^2. The firm faces costs of $10 wages and $1 rental rate of capital. Find the cost function, average total cost, average viable cost, and marginal cost functions. In the short run, only labor is variable output. The marginal productivity of labor is _________. a. the slope of the total output curve at the relevant point. b. the negative of the slope of the total output curve at the relevant point. c. the slope of t If a production function is expressed in a linear form, the inputs used in the production process: a) Are perfect complements. b) Are perfect substitutes. c) Have to be increased in the same proportion. d) Have fixed marginal costs. e) Have equal marginal A firm is currently producing their profit maximizing quantity of 600 units of output using 150 hours of labor and 50 hours of capital. The marginal product of labor is 10 units of output per hour and Consider a production function of the form: q = L^.5 K^.6 Determine the elasticity of output with respect to labor and the elasticity of output with respect to capital. Show that marginal products Marginal physical product can tell a producer a. at what point to stop adding inputs to the production process. b. how much profit will be made at each level of production. c. how much the last input added to the total amount of revenue. d. how much t What is a production function? a. A relationship between a firm's profits and the technology it uses in its production processes. b. A relationship between the cost and revenue of a firm associated with each possible output level. c. A relationship betwee For levels of income to the right of the point where aggregate expenditures equal aggregate production in the AP/AE Model (Multiplier Model): A.inventories are decreasing. B.aggregate production exc The production function is a mathematical function that shows: A. how various inputs are produced. B. the most cost-efficient means of producing output. C. the relationship between output and the factors of production. D. the most efficient level of outpu The marginal product of labor is the A. output level above which the slope of the total product curve falls. B. output level above which the rate of total product per unit of labor falls. C. maximum output attainable with fixed factors when labor is the o Alpha Company is a competitive price-taker firm that is currently producing 100 units of output (Q=100). At the current level of production, the firm has Marginal Revenue of (MR=) $20, Average Variabl Consider a profit-maximizing firm that uses labor, L, as an input to produce its output, Q, according to the production function Q = L^1/2. Labor is paid an hourly wage w. The firm's total revenue is An economist estimated that the cost function of single-product firm is C(Q) = 100 + 20Q + 15Q^2 + 10Q^3, where Q is the quantity of output. Calculate the variable cost of producing 10 units of output. The production function q=100k^0.4L^0.8 exhibits: a. increasing returns to scale but diminishing marginal products for both k and l. b. decreasing returns to scale and diminishing marginal products for both k and l. c. increasing returns to scale but dim The production function relates a. inputs to outputs. b. cost to input c. cost to output d. wages to profits. You estimate a short-run production function to be Q = 16L^{0.8} which gives a Marginal Product of Labor function MPL = 0.8*16L^{-0.2} = 12.8/L^{0.2}. 
If the product made by the labor sells at a price of P=$8 per unit, then the Marginal Revenue Product of For the production function fill in the following table and state how much the firm should produce so that: (a) Average product is maximized (b) Marginal product is maximized (c) Total product is maximized (d) Average product is zero Given the described situation, write down NS's profit maximization problem. What is its optimal level of production as a function of FS's production decision? According to the law of diminishing returns, over some range of output: a. Every production function exhibits diminishing returns to scale. b. Total product will decrease as the quantity of variable input employed increases. c. Marginal product will event Given the production function f (xl, x2) = min{x1, x2), calculate the profit-maximizing demand and supply functions, and the profit function. What restriction must it satisfy? The economically efficient input combination for producing a given level of output a. minimizes the average cost of producing the given level of output. b. occurs at the maximum value of the total product curve. c. can produce that level of output at the The marginal revenue product is: a. an increase in the profit of a firm with an increase in the output by one unit. b. the value that all the unskilled workers contribute to a firm. c. the value th The original revenue function for the microchip producer is R = 170Q - 20^Q. Find the output level at which revenue is maximized. Graphically derive a marginal product of labor function from a total product function for a firm that experiences an increasing rate of returns (faster than proportional) up to an output of 60 units o The marginal revenue product is: a) the value of all the final goods and services produced by a firm. b) the value that a worker contributes to a firm. c) an increase in the profit of a firm with an increase in the output by one unit. d) the output pe A firm is producing 10 units of output: marginal cost is $24 and average total cost is $6 at this level of output. The average total cost at 9 units of output is: (blank). A firm is currently producing 10 units of output; marginal cost is $24 and average total cost is $6 at this level of output. The average total cost at 9 units of output is: a) $4 b) $5 c) $6 d) $8 e) none of the above A perfectly competitive firm is producing 700 units of output in a market where the price is $50 per unit. At this output, TC= $40,000 and TVC= $30,000. The firm is currently producing a level of output where MC is $20 per unit. This output level maximize A firm has the production function f(x, y) = 60x4/5y1/5. The slope of the firm's isoquant at the point (x, y) = (40, 80) is (pick the closest one): a. -0.50 b. -4 c. -0.25 d. -8 The profit-maximizing level of production _____. An economist estimated that the cost function of single-product firm is C(Q) = 100 + 20Q + 15Q^2 + 10Q^3, where Q is the quantity of output. Calculate the average fixed cost of producing 10 units of output. Capricom Products Limited production function is lnQ = 0.75nK + 0.75lnL. Given that the price of labor (L) is $20 and the price of capital (K) is $40: (i) What is the optimal mix? (ii) What is the firm's output elasticity and return is to scale. Explain. This question will walk you through finding the profit-maximizing level of output for a firm with Cobb-Douglas production. 
Suppose the firm's production function for output y is given by The firm is i Suppose we are given a profit function Q = 12L0.5K0.5. The price of labor (L) is $6 per unit and the price of capital (K) is $6 per unit. The firm is interested in the optimal mix of inputs to minimize the cost of producing any level of output Q. In the o A multi-product firm's cost function was recently estimated as C(Q1, Q2) = 75 - 0.25Q1Q2 + 0.1Q1^2 + 0.2Q2^2 a. Are there economies of scope in producing 10 units of product 1 and 10 units of produc If the production function for a certain good exhibits constant returns to scale, does this mean that the law of diminishing marginal returns does not apply? If the marginal product of an input is positive, but decreasing as more and more of the input is employed, then: a) the firm must be operating in the long run. b) average product must be declining. c) the firm should produce less output. d) total product If a firm is producing a level of output such that P < ATC, it may be concluded that: a. it will incur a loss b. its profits will be zero c. it will incur a profit d. it should shut down and only p
CommonCrawl
The effects of coating culture dishes with collagen on fibroblast cell shape and swirling pattern formation Kei Hashimoto1,2,3 na1, Kimiko Yamashita1,2,4,5,6 na1, Kanako Enoyoshi1,2, Xavier Dahan2,7, Tatsu Takeuchi8, Hiroshi Kori ORCID: orcid.org/0000-0003-2899-78961,9 & Mari Gotoh3 Journal of Biological Physics volume 46, pages 351–369 (2020)Cite this article Motile human-skin fibroblasts form macroscopic swirling patterns when grown to confluence on a culture dish. In this paper, we investigate the effect of coating the culture-dish surface with collagen on the resulting pattern, using human-skin fibroblast NB1RGB cells as the model system. The presence of the collagen coating is expected to enhance the adherence of the fibroblasts to the dish surface, and thereby also enhance the traction that the fibroblasts have as they move. We find that, contrary to our initial expectation, the coating does not significantly affect the motility of the fibroblasts. Their eventual number density at confluence is also unaffected. However, the coherence length of cell orientation in the swirling pattern is diminished. We also find that the fibroblasts cultured in collagen-coated dishes are rounder in shape and shorter in perimeter, compared with those cultured in uncoated polystyrene or glass culture dishes. We hypothesise that the rounder cell-shape which weakens the cell–cell nematic contact interaction is responsible for the change in coherence length. A simple mathematical model of the migrating fibroblasts is constructed, which demonstrates that constant motility with weaker nematic interaction strength does indeed lead to the shortening of the coherence length. Collective cell migration is a key process observed at various stages in the development of multi-cellular organisms, starting with gastrulation and continuing into organogenesis [1]. Well-studied examples include neural-tube closure of vertebrae and lateral-line formation in zebrafish. After birth, it is involved in wound healing and cancer metastasis [2]. Collective cell migration is also observed in single-cell organisms. A well-known example is aggregation by which Dictyostelium cells form a slug-like structure when starved [3]. Deciphering the mechanisms that drive robust and precise collective migration of a large number of cells is of vital importance in understanding development, differentiation, and evolution, with many possible applications in cancer therapies, regenerative medicine, and tissue engineering [2, 4,5,6]. In vitro cultivation studies can provide important insights into collective cell migration. When cultivated densely, complex alignment patterns are known to spontaneously form in several types of cells. The characteristic coherence length of the resulting alignment pattern is of special interest since it provides a measure of the number of cells that can migrate collectively. One of the key factors determining the coherence length is the strength of the cell–cell contact interaction. Although a migrating cell has vectorial polarity, moving toward a specific heading, the alignment of these cells is often nematic, that is, neighbouring cells tend to migrate either in parallel or anti-parallel directions. This nematic alignment can be observed in several cellular-scale objects including gliding microtubules, actin filaments, bacteria, and cultured cells [7,8,9,10,11,12,13,14]. 
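To make the idea of a tunable cell-cell nematic contact interaction concrete, the following sketch implements a generic self-propelled-particle model in which neighbours align head-to-head or head-to-tail through a sin(2Δθ) coupling of strength gamma. It is an illustrative toy, not the specific model constructed later in this paper, and every parameter value below is an arbitrary assumption; increasing gamma (at fixed noise) produces larger locally aligned patches, i.e., a longer coherence length.

```python
# Minimal, assumption-laden sketch of a self-propelled-particle model with a
# tunable nematic alignment strength gamma (illustrative only).
import numpy as np

rng = np.random.default_rng(1)
N, L, R = 400, 20.0, 1.0            # particles, periodic box size, interaction radius
v0, gamma, eta, dt = 0.05, 1.0, 0.2, 1.0

pos = rng.uniform(0.0, L, size=(N, 2))
theta = rng.uniform(0.0, 2.0 * np.pi, size=N)

for _ in range(500):
    d = pos[:, None, :] - pos[None, :, :]      # pairwise displacements
    d -= L * np.round(d / L)                   # minimum-image periodic boundaries
    neigh = (d ** 2).sum(axis=-1) < R ** 2     # neighbour mask (self included, sin(0)=0)
    torque = (np.sin(2.0 * (theta[None, :] - theta[:, None])) * neigh).sum(axis=1)
    theta += gamma * dt * torque / neigh.sum(axis=1) \
             + eta * np.sqrt(dt) * rng.normal(size=N)
    pos = (pos + v0 * dt * np.c_[np.cos(theta), np.sin(theta)]) % L

# global nematic order parameter: |<exp(2 i theta)>| is close to 1 for strong alignment
S = np.abs(np.exp(2j * theta).mean())
print(f"global nematic order parameter S = {S:.2f}")
```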
Nematic alignment is also observed in purely mechanical systems whose members interact via the excluded-volume effect, such as in a population of rod-like objects [15]. In this case, the strength of the nematic interaction is determined by the shape of the rods; the interaction being stronger between longer rods. In the present paper, we investigate the alignment pattern of human-skin fibroblast NB1RGB cells grown to confluence in a culture dish. Skin fibroblasts are cells that provide structural support for the skin and are easily cultivated in vitro. They represent a convenient model system for studying the collective migration of cells. Here, we examine how the alignment pattern of the fibroblasts depends on whether or not the dish surface is coated with collagen. To check for possible dependence of the result on the substrate material, the experiment is performed on two types of dishes: polystyrene and glass. Since fibroblasts adhere to collagen via integrins, a natural expectation would be that the collagen-coated surface would provide enhanced traction to the fibroblasts leading to their enhanced motility, which in turn would lead to a change in the alignment pattern. In vitro, the collagenous extracellular matrix (ECM) is known to stimulate skin-fibroblast motility [16], which adds support to this expectation. As reported in [12], when fibroblasts are seeded onto a culture dish at low density, the cells adhere to the dish individually and then migrate randomly into cell-free areas while only occasionally coming into contact with other cells. As they proliferate, their density increases and confluence is eventually achieved. Confluent fibroblasts align locally along their elongated axes and form macroscopic swirling patterns (see Supplementary Videos 1 and 2). A similar swirling pattern, the storiform pattern, is often observed in fibrohistiocytic lesions in vivo [17, 18]. We find that for both polystyrene and glass dishes, the characteristic coherence length of the swirling pattern decreases as the density of the collagen coated onto the dish surface is increased. Moreover, we observe that the cells become rounder in shape with the increase in coated-collagen density, whereas cell-number density and, unexpectedly, cell motility remain unchanged. From these experimental results, we hypothesise that the difference in the coherence length mainly follows from the difference in the strength of the nematic contact interaction between the cells; i.e., rounder cells experience weaker nematic interactions, and thereby the coherence length becomes shorter. To test the feasibility of this hypothesis, we construct a simple mathematical model of migrating cells in which the strength of the nematic interaction can be controlled by a single parameter. Numerical simulations of this model demonstrate that the coherence length does indeed correlate positively with the nematic interaction strength. We thus propose that the collagen coating first leads to the change in the fibroblast cell shape, which in turn shortens the coherence length. Possible mechanisms by which the collagen coating leads to the rounding of the fibroblasts are also discussed. Human-skin fibroblast NB1RGB cells form macroscopic swirling patterns Human-skin fibroblast NB1RGB cells form macroscopic swirling patterns when cultivated in a culture dish due to their elongated shape, motility, and proliferation within the confined two-dimensional surface. 
Our objective is to quantify the difference in the swirling patterns when the fibroblasts are cultured on uncoated and collagen-type-I-coated dishes. As discussed above, we perform the experiment on both polystyrene- and glass-bottom dishes. The collagen-coating procedure is detailed in Sect. 4. The resulting collagen density on the surface of the culture dish depends on the concentration of collagen type I in the initial collagen solution used in the coating process. We use the coating obtained from a 10.0-μg/mL collagen type-I solution as the standard reference. Typical results for uncoated and collagen-coated dishes are shown in Fig. 1. See also Supplementary Videos 1 (uncoated) and 2 (collagen-coated). In the first row of Fig. 1a, still images of the confluent fibroblast swirling patterns on uncoated (left) and collagen-coated (right) polystyrene dishes are shown. The black scale bar in the lower-right corner of the right image is 1 mm long. For ease of comparison, the orientations of the individual fibroblasts in these images are read using OrientationJ software [19] and shown colour coded in the second row of Fig. 1a, with light-blue and red respectively indicating horizontal and vertical orientations (see Sect. 4.4 for details). Other colours indicate orientations in between, as shown on the scale on the right margin of Fig. 1b.

Fig. 1 Patterns formed by human-skin NB1RGB fibroblast cells cultured in a polystyrene dish. a Typical patterns of confluent fibroblasts cultured for 144 h on the uncoated control (left) and the dish coated from the 10.0 μg/mL collagen type-I solution (right). The black scale bar in the lower-right corner of the right monochrome image is 1 mm long. The smaller images in the right margin are dishes coated from 0.1 μg/mL (top) and 1.0 μg/mL (bottom) collagen type-I solutions. For each pair of images, the lower colour-map encodes the orientations of the fibroblasts in the upper bright-field image in accordance with the colour scale shown to the right of (b). b The block-averaged orientations of the fibroblasts on uncoated (left) and 10.0 μg/mL collagen-coated (right) dishes shown in (a). c, d Correlation functions of cell orientation for the uncoated (c) and collagen-coated (d) cases. The graphs c' and d' in the sub-window show the behaviour of the two functions near the origin on the same axes for the ease of comparison. e d50 values averaged over 40 images. Data represent the mean ± standard error of mean (SEM) from four independent cultures. Ten images were captured from each culture dish. ***p < 0.001 under Student's one-tailed t test when compared with the control.

Comparing the colour-coded images by inspection, one discerns that the fibroblasts form into patches of cells with similar orientation, and that these patches are slightly larger for the uncoated dish compared with the collagen-coated dish. We quantify this observation by extracting the correlations of cell orientations from the images following the procedure detailed in Sect. 4. First, the 1600 × 1200 pixel image is divided into a 50 × 38 grid, each subdivision being 32 × 32 pixels in size. Then, the block-averaged orientation is calculated for each subdivision, the results of which are shown in Fig. 1b. The correlations between the block-averaged orientations for each separation of the blocks are then calculated. The resulting correlation functions are plotted in Fig. 1c (uncoated polystyrene) and d (collagen-coated polystyrene) for the images shown in Fig. 1a.
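To make the analysis pipeline concrete, the following is a minimal NumPy sketch (ours, not the authors' code) of the block-averaging and correlation steps just described. Per-pixel orientations are assumed to have already been extracted, for example with a structure-tensor tool such as OrientationJ; blocks are averaged with the doubled-angle nematic mean used in Sect. 4, and the correlation at each block separation is normalised by the number of contributing block pairs, a small simplification of the unnormalised sum written in the Methods. Function names and the synthetic input are illustrative assumptions.

```python
# Minimal sketch: block-averaged nematic orientations, orientation correlation
# function C(d), and the coherence length d50 (pair distances binned to integers).
import numpy as np

def block_orientations(theta, block=32):
    """Average per-pixel orientations theta (radians) over block x block tiles,
    using the doubled-angle (nematic) mean so that 0 and pi are identified."""
    ny, nx = theta.shape
    ny_b, nx_b = ny // block, nx // block
    t = theta[:ny_b * block, :nx_b * block]
    z = np.exp(2j * t).reshape(ny_b, block, nx_b, block).mean(axis=(1, 3))
    return 0.5 * np.angle(z)              # block orientation in (-pi/2, pi/2]

def correlation_function(theta_block, max_d=20):
    """C(d) = mean of cos 2(theta_a - theta_b) over block pairs whose separation
    rounds to d (d in units of the block size). Brute force; fine for 50 x 38."""
    ny, nx = theta_block.shape
    yy, xx = np.mgrid[0:ny, 0:nx]
    pos = np.column_stack([yy.ravel(), xx.ravel()]).astype(float)
    th = theta_block.ravel()
    dists = np.rint(np.hypot(pos[:, None, 0] - pos[None, :, 0],
                             pos[:, None, 1] - pos[None, :, 1])).astype(int)
    corr = np.cos(2 * (th[:, None] - th[None, :]))
    return np.array([corr[dists == d].mean() for d in range(max_d + 1)])

def d50(C, block_size_um=139.2):
    """First distance at which C(d) drops below half its maximum (a coarse
    stand-in for solving C(d50) = 0.5 exactly), converted to micrometres."""
    below = np.where(C < 0.5 * C.max())[0]
    return block_size_um * below[0] if below.size else np.nan

# Demo on a synthetic 1600 x 1200 orientation field (random, so d50 is ~1 block).
rng = np.random.default_rng(0)
theta = rng.uniform(-np.pi / 2, np.pi / 2, size=(1200, 1600))
C = correlation_function(block_orientations(theta))
print("d50 ~", d50(C), "um")
```

On an uncorrelated field the correlation collapses within a single 139.2-μm block, so d50 comes out at roughly one block; orientation maps with patch structure, such as those in Fig. 1b, give correspondingly larger values.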
A blow-up of the graphs between 0 and 1 mm is shown in the subframe inside Fig. 1d (c' and d'). We can see that the orientation correlation falls off more quickly for the collagen-coated dish compared with the uncoated dish. This difference is consistently reproduced for multiple dishes, over multiple repetitions of the experiment, for both dish materials. To characterise the fall-off of the correlation function with distance, we define the length d50 to be the distance at which the correlation is reduced to one half (50%) of the maximum value (100%). How this length is determined is illustrated in Fig. 1c, d. The values of d50 are calculated following this procedure for multiple images and averaged. Forty-image averages of d50 for polystyrene are plotted in Fig. 1e for the uncoated and three collagen-coated cases, where initial collagen solutions of concentrations 0.1, 1.0, and 10.0 μg/mL are respectively used. Figure S3B in the Supplementary information compares four-image averages of d50 for uncoated glass dishes and glass dishes coated from the 10.0 μg/mL collagen solution. A clear trend can be seen in Fig. 1e in which the orientation coherence length decreases with the initial collagen concentration, indicating that the cell orientations vary more rapidly with distance when the collagen coatings on the culture dishes are denser, resulting in the cells forming more swirling patterns overall. Figure S3B, though having only two data points, confirms this trend. The culture dishes in this study are coated with collagen molecules dissolved in acidic conditions and not with collagen fibrils. To test whether the structure of the collagen coating affects the coherence length, we culture the cells in dishes coated with various types of collagen, all starting from an acidic (pH 3.0) solution of concentration 10.0 μg/mL. These are:

- Collagen type I (the reference)
- Heat-denatured collagen type I
- Gelatin, which comprises thermally denatured collagen fibrils
- Collagen type IV, which is a non-fibrillar form of collagen

In all of these cases, the values of d50 are decreased compared with those of the control (uncoated). See Fig. S2A–F in the Supplementary information. This indicates that the reduction of orientation coherence occurs independently of the structure of collagen.

Collagen coating affects cell shape, but not single-cell motility or cell-number density

To clarify the effects of the collagen coating on individual fibroblast cells, we analyse cell motility and cell-number density, both of which are likely to affect the alignment pattern [20, 21]. To investigate cell motility, we track the movements of individual cells cultured at low and high densities (Fig. 2a). This measurement is possible for glass dishes only due to limitations of our equipment. At low density, the instantaneous velocity is slightly smaller for the collagen-coated dish compared with the uncoated dish but not by a significant amount. At high cell density, the instantaneous velocities are indistinguishable between the uncoated and collagen-coated cases. A similar tendency is observed in the 10-h-average velocity (Fig. S3D, E), suggesting that directional persistence in the cell motility has little dependence, if any, on the dish coating. Moreover, we measure the number of cells in the high-density cultures, for both dish materials, and find no significant difference in the numbers of cells (Fig. 2b and Fig. S3C).

Fig. 2 Effect of collagen coating on the motility (a), number (b), and morphology (e, f) of human-skin fibroblast cells.
Samples are compared with Student's t test and labelled n.s. (no significance); *p < 0.05; **p < 0.01; ***p < 0.001. a Migration velocities of the fibroblasts cultured at low and high densities on uncoated and 10.0-μg/mL collagen-type-I-coated glass dishes. Each data point represents the mean ± SEM of 20 cells chosen randomly from two cell cultures. b Number of fibroblasts cultured for 24–144 h on uncoated and 10-μg/mL collagen-type-I-coated polystyrene dishes. Data points represent the mean ± SEM of four dishes. c–f Analysis of fibroblast cell morphology. c, d Fibroblasts cultured on uncoated and 10-μg/mL collagen-type-I-coated polystyrene dishes at 24 h (c; low density) and 72 h after culture start (d; high density). Scale bar, 100 μm. The fibroblasts were observed and photographed with a Nikon ECLIPSE TS 100 phase-contrast microscope (NIKON Corp., Tokyo, Japan). e, f Circularity and area of fibroblasts cultured on uncoated and 0.1-, 1.0-, and 10.0-μg/mL collagen-type-I-coated polystyrene dishes at low density. Each data point represents the mean ± SEM of 20 cells chosen randomly from two cell cultures We also analyse the effects of the collagen coating on the fibroblast cell shape. We measure the area S and perimeter L of each cell on uncoated and 0.1, 1.0, and 10.0 μg/mL collagen type-I-coated polystyrene dishes at low density (Fig. 2c, f), and quantify the cell roundness in terms of the circularity 4πS/L2 (Fig. 2e). The comparison between uncoated and 10.0-μg/mL collagen type-I-coated glass dishes is shown in Fig. S3F, G in the Supplementary information. These measurements indicate that the NB1RGB fibroblasts at low cell density become rounder and smaller as the density of the collagen on the coating is increased, for both polystyrene and glass dishes. The same tendency is observed for heat-denatured collagen type I, gelatin, and collagen type IV on polystyrene (Fig. S2G, H), which suggests that interactions with the collagen coating make the shape of NB1RGB fibroblasts rounder and smaller. At high cell density, it is difficult to quantify the cell shape since the cell boundaries become difficult to discern (Fig. 2d). Regarding cell area, there should be no significant difference between the uncoated and 10.0 μg/mL collagen type-I-coated dishes at high cell density given that the final cell densities are indistinguishable. Simulation: interaction strength increases coherence length Among our observations, the only significant difference between the cells cultured in coated and uncoated dishes is in their shapes at low density. Although it is possible that the difference in the cell shape is not maintained at high density, the shape of isolated single cells is likely to affect the orientation dynamics in highly dense cell populations. Motivated by previous studies on the collective dynamics of rod-like objects [15], we consider the hypothesis that the nematic interactions between the cells in collagen-coated dishes are weaker than those in the uncoated dishes due to the cells becoming rounder in the presence of collagen, and it is this weaker nematic interaction that leads to the shorter coherence length. The viability of this hypothesis depends on whether varying the strength of nematic interactions alone is sufficient in changing the coherence length. To this end, we construct a simple mathematical model with just a few tunable parameters, as described in Sect. 4. The model includes effects of cell motility, cell proliferation, excluded volume effect, and nematic interactions. 
We run numerical simulations of our model to assess the effect of different nematic interaction strengths on the cell alignment patterns. Despite its simplicity, our model reproduces the experimentally observed dynamical behaviour well (see Supplementary Videos 3 and 4). Figure 3 shows some typical numerical results, where the parameter K (in units of hour−1) quantifies the strength of the nematic interaction. Figure 3a shows the alignment pattern obtained for K = 0.020/h, after running the simulation from a random initial condition for an equivalent of 144 h. Figure 3c shows its block-averaged cell orientations, and Fig. 3e the corresponding distance dependence of the orientation correlation. Figure 3b, d, and f shows those for K = 0.015/h. Simulation results. The strength of the nematic interaction is K = 0.020/h in (a), (c), and (e) and K = 0.015/h in (b), (d), and (f). a, b Cell positions and orientations. c, d Averaged orientations for the 32 × 32 pixel subdivisions. e, f Typical correlation functions of the averaged cell orientations and the determination of d50. The subwindow-labelled e'f' in (f) shows the same graphs together on the same axes for the ease of comparison. g Dependence of d50 on the interaction strength K for three different choices of the cell migration speed v. Dashed line: v = 0.1864 unit/h (uncoated measured value), dotted line: v = 0.1598 unit/h (10-μg/mL-collagen-type-I-coated measured value), solid line: v = 0.1731 unit/h (average of the two measurements). The values of d50 are averaged over ten simulations for each value of K. For the v = 0.1731 unit/h (average) case; error bars are shown for each point. h Time courses of d50 for K = 0.020/h and K = 0.015/h As shown in Fig. 3g, an increase in the nematic interaction strength K, with all other model parameters kept fixed, results in an increase in the coherence length. Figure 3g also shows how the K-dependence of d50 changes with the cell migration velocity v. The three values of v chosen are the experimentally observed values at low cell density with and without collagen coating (Fig. 2a) and their average. There is little variation, demonstrating that our conclusion is independent of v. We also note that the observed values of d50 develop rather late in the simulation as shown in Fig. 3h. To gain deeper understanding of the pattern formation process, we perform further numerical simulations under various conditions. The results are shown in Fig. 4 and Supplementary Videos 5–7. In all of the cases except for K = 0, we confirm the formation of swirling patterns and the increase of d50 values with the strength of nematic interaction. Simulation results. a d50 vs. K at t = 144 h and b d50 vs. t for K = 0.02/h with the following conditions: (i) "control", which is the same as the result shown in Fig. 3 with v = 0.1864 units/h, (ii) "zero speed", where v = 0, (iii) "random walk", where the migration direction of each cell at each time step is set to be a random value irrespective of its orientation, (iv) "from high density", where the cell proliferation is absent and the initial number of cells is 14,080, (v) "random orientation", where the orientation of the newborn cell is assigned a random value, (vi) "zero speed, from high density, no excluded volume effect", where we employ (ii) and (iv) and further assume KC = 0. c d50 vs. K at t = 144 h with noise strength μ = 0.1/h. d d50 vs. μ at t = 144 h for K = 0.2/h Recall that in Fig. 3h, d50 values grow rapidly at late times. 
In contrast, in the "from high density" case in Fig. 4b, d50 grows from earlier times. These observations suggest that the growth of the coherence length starts after the cell density becomes sufficiently high. To check the robustness of our results, we further perform additional numerical simulations in which noise is included in the angular dynamics (Fig. 4c, d and Supplemental movies 6 and 7). We introduce white noise of the strength μ (with units hour−1) in the angular dynamics of each cell. The correlation time of angular dynamics of single isolated cells is approximately given by 1/μ, i.e., the direction of a cell at time t is approximately independent of that at time t − 1/μ. We choose a reference value of μ to be 0.1/h in our simulations. As seen in the movies, swirling patterns develop even in the presence of noise though the strength of nematic interaction required is larger compared with the noiseless case. In Fig. 4c, it is observed that substantial coherence emerges when the coupling strength K is comparable with or larger than the noise strength μ = 0.1/h, and the coherence length increases with the coupling strength K. Conversely, Fig. 4d shows the dependence of d50 on the noise strength μ with K fixed to 0.20/h, and the value of d50 can be seen to increase as the noise strength μ is lowered. As there is no discernible experimental difference in the directional persistency between coated and uncoated dishes (Fig. S3d, e), the difference in actual noise level must be small, and it is unlikely such a difference would account for the observed difference between the d50 values in the fibroblast cultures. Our simple mathematical model thus demonstrates that the effect of collagen coating on the orientation patterns can be accounted for solely by collagen's effect on the strength of the nematic interactions between the fibroblast cells. Human-skin fibroblasts cultured at high density form macroscopic swirling patterns in the culture dishes. This study reveals that the cells become rounder and form less coherent patterns as the density of the collagen coated on the culture dishes is increased. There are clear differences in the morphological properties between cells cultured in the uncoated and coated dishes; cells cultured in the coated dishes are shorter in perimeter and rounder (Fig. 2c, e, and f). The following molecular mechanism could underlie this change. It is known that the elongated cells have developed stress fibres; conversely, lamellipodia, which are formed by actin projections on the edge of the cell, rather than a stress fibre, are activated to make the cells round [22]. Two members of the Rho GTPase family, RhoA and Rac1, are necessary for the regulation of various cellular behaviours, including microfilament network organisation. The formation of stress fibre and lamellipodia are induced by RhoA and Rac1, respectively [23,24,25,26,27]. RhoA and Rac1 inhibit each other's activation, and this competition between the two is one of the factors that regulate cell morphogenesis [23, 28]. It has been reported that collagen type I increases the activity of Rac1 in human platelets [29]. In fibroblasts, it is unknown whether collagen controls the Rac1 activity, whereas the inhibition of Rac1 activation promotes the expression of collagen protein, suggesting that collagen interacts with Rac1 in fibroblasts [30]. 
Therefore, it is possible that the collagen coating induces the development of lamellipodia and decreases that of stress fibres via Rac1 activation, thus changing the NB1RGB cell roundness. Collagen is also known to control fibroblast motility. Li et al. [16] report that collagen coating promotes cell migration into cell-free space in scratch assays. This result seemingly differs from our results shown in Fig. 2a, wherein no significant difference was found between the migration speeds in the uncoated and coated dishes. However, they do not necessarily contradict each other since Li et al. [16] studied a different scenario: it analysed the migration of the cell assembly from densely populated to scratched regions. Moreover, the migration speed of individual cells may be sensitive to its measurement method, such as the discretisation of cell trajectories and the observation time interval. Depending on the method, a significant difference between the uncoated and coated dishes could arise. However, our numerical simulations reveal that the coherence length is insensitive to small variations in migration velocity (Fig. 3g), suggesting that the difference in migration velocities at low density, if any actually exists, does not have a substantial effect on pattern formation at high density. As the collagen density is increased, the cells become rounder, whereas cell motility remains unchanged, as shown in Figs. 1e and 2e. The same tendency is found for different types of coating materials (collagen type I, heat-denatured collagen type I, gelatin, and collagen type IV), as shown in Fig. S2G. We thus hypothesise that the change in coherence length results from a change in the cell–cell contact interactions mediated by a change in cell shape. However, we do not have any direct evidence of this, and it therefore remains an open issue. Use of genetic manipulation or inhibitors that exclusively control the cell shape would be required for such a study. Concerning the mathematical models of fibroblast orientation proposed in the literature, some focus on the interplay between the cells and the extra-cellular matrix (ECM) [31, 32], known as dynamic reciprocity. Fibroblast orientation has also been modelled in terms of individual cell migration [33]. Systems of reaction-diffusion and integro-partial differential equations have also been used to model fibroblast orientation; however, these require large numbers of parameters and computational complexity that would unnecessarily complicate the isolation of the key target parameter. Instead, our goal in using mathematical modelling is to test the hypothesis that changes in the coherence length of the orientation patterns can be driven solely by changes in the strength of cell-cell interactions. We therefore consider a model in which the strength of the nematic interaction can be controlled by a single parameter. In addition, to keep the model as simple as possible, we regarded each cell as a self-driven particle with constant speed, following various theoretical studies on collective migration [34, 35]. Despite the simplicity of our model, it qualitatively reproduces our experimental findings (Fig. 3) and represented rather realistic dynamics of collective migration (Supplementary Videos 3 and 4). Our model thus demonstrates that the coherence length increases with the strength of the cell–cell interaction, lending support to our hypothesis. Using our mathematical model, we perform various in silico experiments (Fig. 4; Supplementary Videos 5 and 6). 
Similar swirling patterns are observed in all the conditions we employ except for the case of K = 0. In particular, qualitatively the same results are obtained even when spontaneous mobility, reproduction, and excluded volume effect are all turned off. Therefore, we propose that the nematic interaction is the primal factor for the alignment process and the formation of swirling patterns in cell cultures. When only the nematic interaction is considered, the system is essentially the same as a population of identical oscillators distributed in a two-dimensional space with a synchronisation interaction between close neighbours. The synchronisation process of such a system can be described by the model \( \frac{\mathrm{d}{\theta}_i}{\mathrm{d}t}=\omega +\hat{K}\sum \limits_{j=\left\langle i\right\rangle}\sin \kern0.3em m\left({\theta}_j-{\theta}_i\right) \) [36], where ω denotes the natural frequency of each oscillator, \( \hat{K} \) denotes the interaction strength, and j = 〈i〉 indicates a sum over the nearest neighbours of the i-th oscillator. The parameter m is introduced for convenience to toggle between the two cases of ferromagnetic (m = 1) and nematic (m = 2) interactions. The latter corresponds to our situation, whereas the former is usually considered for a system of interacting oscillators. We may set ω = 0 without loss of generality as it corresponds to the change θi → θi − ωt. The parameter m may also be set to unity without loss of generality as it corresponds to the changes \( {m\theta}_i\to {\theta}_i,m\upomega \to \upomega, \kern0.5em m\hat{K}\to \hat{K}. \) Thus, let us assume ω = 0 and m = 1. Then, it is clear that \( \hat{K} \) determines only the time scale of the process; i.e., the process becomes faster for larger \( \hat{K} \) without any other changes. Moreover, the system can be given a variational form as \( \frac{\mathrm{d}{\theta}_i}{\mathrm{d}t}=-\hat{K}\frac{\partial }{\partial {\theta}_i}G \), where \( G=-\frac{1}{2}\sum \limits_{i,j=\left\langle i\right\rangle}\cos \left({\theta}_j-{\theta}_i\right) \). Thus, the system evolves with time toward a minimum of G. The global minimum of G corresponds to θj = θi for all coupled pairs, suggesting that perfect alignment should eventually be achieved. When a random initial condition for the θi values is employed, many topological defects typically arise. The coherence length increases with time as the number of defects decrease by annihilation, as observed in various systems (see ref. [37] and references therein). Thus, with a given observation time, a larger coherence length is obtained for larger \( \hat{K} \). We suppose that this speed-up effect is a primal mechanism underlying the increase of coherence length with the nematic coupling strength, denoted by K in our model of motile cells. These observations are in line with ref. [12], where the author observes pattern formation in cultures of fibroblasts with elongated cell shapes and finds that the number of swirling patches progressively decreases after confluence and a single parallel array is eventually obtained. Correlation analysis reveals that the patterns formed under the conditions of this experiment are on the order of several cell-lengths long; however, larger cell assemblies may migrate collectively in vivo [38]. Possible factors that hamper larger-scale coherence in vitro include noisy single-cell migration, cell division, and topological defects [10]. 
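The speed-up argument above can be illustrated with a few lines of code. The sketch below (ours, not taken from the paper) integrates the reduced nearest-neighbour nematic system, dθi/dt = K̂ Σ sin 2(θj − θi) with ω = 0, on a small periodic lattice from a random initial condition and reports a crude nematic-coherence proxy after a fixed observation time; a larger K̂ relaxes the random initial condition faster and so leaves a more coherent field at that time. The lattice size, time step, boundary condition, and K̂ values are illustrative choices only.

```python
# Illustrative sketch: nearest-neighbour nematic phase oscillators on an L x L
# periodic lattice, d(theta_i)/dt = K * sum_j sin(2*(theta_j - theta_i)).
# At a fixed observation time, larger K leaves a more coherent orientation field.
import numpy as np

def neighbour_sum(theta):
    """Sum of sin(2*(theta_j - theta_i)) over the four lattice neighbours j."""
    s = np.zeros_like(theta)
    for shift, axis in [(1, 0), (-1, 0), (1, 1), (-1, 1)]:
        s += np.sin(2 * (np.roll(theta, shift, axis=axis) - theta))
    return s

def coherence(theta):
    """Mean nematic alignment with the right-hand neighbour (1 = fully aligned)."""
    return np.cos(2 * (np.roll(theta, -1, axis=1) - theta)).mean()

def run(K, L=64, t_end=144.0, dt=0.5, seed=0):
    rng = np.random.default_rng(seed)
    theta = rng.uniform(0.0, np.pi, size=(L, L))   # random initial orientations
    for _ in range(int(t_end / dt)):
        theta += dt * K * neighbour_sum(theta)     # explicit Euler step
    return coherence(theta)

for K in (0.015, 0.020, 0.2):                      # illustrative coupling strengths
    print(f"K = {K:5.3f}   coherence after 144 h = {run(K):.3f}")
```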
To realise the large-scale collective migration in vivo, several other factors may play substantial or complementary roles, such as cell adhesion, spontaneous role assignment among cells, and polarity alignments among adjacent cells [38,39,40]. Mechanical tension applied to the tissue would also contribute to collective migration [41, 42]. Our experimental and numerical results support our hypothesis that cell shape affects large-scale coherence by mediating the strength of cell-cell contact interactions. Although our finding is based on an in vitro system, such a mechanism may be at work in vivo as well. Thus, our study suggests that cell shape may play an essential role in cell-cell communication in single and multi-cellular organisms.

Coating of culture dish

Collagen type I, collagen type IV, and gelatin were purchased from Nitta Gelatin Inc. (Osaka, Japan; product names: Cellmatrix Type IA, Cellmatrix Type IV, and GLS250). Collagen types I and IV are provided as acidic solutions of pH 3 and concentration 3 mg/mL. These were diluted with a 1 mM HCl solution (pH 3.0) to the desired concentrations. Gelatin was dissolved in the 1 mM HCl solution to the desired concentration. Maintaining the acidic condition prevents the formation of collagen fibrils from the collagen molecules. The heat-denatured collagen type-I solution was prepared by heating the 10.0 μg/mL Cellmatrix Type-IA solution for 30 min at 60 °C. The polystyrene culture dishes used in this experiment were 100 mm in diameter and obtained from AS ONE (Osaka, Japan). The glass culture dishes were 35 mm in diameter and obtained from Iwaki Co., Ltd. (Tokyo, Japan). The culture dishes were incubated with the various types of collagen solutions, or just the vehicle solution (1 mM HCl) for the uncoated control, overnight at 4 °C. Subsequently, the dishes were air dried and washed four times in phosphate-buffered saline (PBS). The amount of collagen that was attached to the dish surface was estimated as follows. Following the same procedure as the culture dishes, each well of a 96-well polystyrene plate was incubated with 50 μL of the collagen or vehicle solution overnight at 4 °C, then air dried and washed four times in PBS. The amount of collagen attached to the well surface was then measured with the Collagen Quantitation Kit (Cosmo Bio, Tokyo, Japan) following the manufacturer's protocol. By comparing the amount of coated collagen prepared with 1 and 10 μg/mL collagen solutions, we confirmed that the collagen coated on the wells increased approximately in proportion to the dosage (Fig. S1). Note that the detection limit of the Quantitation Kit was 0.4 μg/mL, hence the amount of collagen in the well prepared with the 0.1 μg/mL solution was below the detection limit.

Normal human-skin fibroblasts, RIKEN original (NB1RGB), were provided by the RIKEN BioResource Research Center through the National BioResource Project of MEXT, Japan. The cells were cultured in minimum essential medium alpha (MEMα; Life Technologies, Carlsbad, CA) supplemented with 10% fetal bovine serum (FBS; Biowest, Nuaille, France) at 37 °C in a humidified incubator with a 5% CO₂ atmosphere. The 100-mm diameter polystyrene dishes were seeded with 5.0 × 10⁵ NB1RGB cells, and incubated for up to 144 h (6 days). The exception was the experiment reported in Fig. S2, which started out with 1.0 × 10⁶ cells and incubated for 72 h (3 days), the larger initial cell-count leading to an earlier attainment of confluence.
The 35-mm diameter glass dishes were seeded with 6.0 × 10⁴ NB1RGB cells so that the initial cell-density would be the same as the 100-mm diameter dish with 5.0 × 10⁵ cells. These were incubated for up to 90 h. After incubation, the dishes were washed in ice-cold PBS and the cells fixed in ice-cold 100% methanol.

Cell number measurement

For the cell-number density measurements reported in Fig. 2b and Fig. S3C, the cells were dissociated from the dishes with 0.025% Trypsin-EDTA, and the number of cells counted using an automated cell counter (BECKMAN COULTER, Brea, CA).

Orientation analysis

For cell-orientation analyses reported in Fig. 1 and Figs. S2 and S3, images of the swirling patterns were captured at × 50 magnification with a digital microscope (VH-Z20W, KEYENCE, Osaka, Japan). Each image had 1600 × 1200 pixels, corresponding to an area of 6960 × 5220 μm². Since this is quite small compared with the total area of the 100-mm diameter polystyrene dish, images of ten different non-overlapping fields of the dish were collected from each. For the 35-mm diameter glass dishes, with a 12% area compared with the 100-mm dish, one image was taken from each. These images were analysed with the ImageJ plugin OrientationJ [19] to generate the colour-coded maps shown in Fig. 1a and Figs. S2A–E and S3A. OrientationJ determines the local orientation θ of an image as follows. The 2D monochrome image is essentially an intensity function f(x, y) defined for every pixel (x, y) of the frame. OrientationJ overlays a Gaussian window w(x − x0, y − y0) on the field and computes the structure tensor matrix $$ J_{ij}(x_0,y_0)=\int w(x-x_0,\,y-y_0)\,\partial_i f(x,y)\,\partial_j f(x,y)\,\mathrm{d}x\,\mathrm{d}y $$ for every (x0, y0). Here, w(x − x0, y − y0) is a Gaussian centered at (x0, y0) with user-specified width, but with its tail truncated outside the local region of interest. The dominant orientation \( \overrightarrow{u}=(\cos \theta, \sin \theta) \) at (x0, y0) is a vector of norm 1 which maximises $$ u_i\, J_{ij}(x_0,y_0)\, u_j=\left\Vert \partial_{\theta} f\right\Vert^2, $$ i.e. the norm of the directional derivative of f(x0, y0) in the direction of \( \overrightarrow{u} \). It is the normalised eigenvector corresponding to the largest eigenvalue of the structure tensor matrix Jij(x0, y0). The value of the orientation θ is then colour coded according to the scale shown on the right margin of Fig. 1b. To obtain the average-orientation images, each 1600 × 1200-pixel image was subdivided into a 50 × 38 grid, each subdivision being 32 × 32 pixels (139.2 × 139.2 μm²) in size. We label the subdivisions with a pair of integers (k, ℓ), where 1 ≤ k ≤ lx = 50, and 1 ≤ ℓ ≤ ly = 38. The index k labels the columns of the grid from left to right, while ℓ labels the rows of the grid from top to bottom. The orientation θkℓ of the subdivision (k, ℓ) was then obtained by setting a Gaussian window of the size of 32 × 32 pixels in OrientationJ.

Correlation functions of average orientation

The correlation functions of average orientation are computed as follows. The correlation between region (i, j) and region (k, ℓ) is defined as Ci,j,k,ℓ = cos 2(θij − θkℓ).
The total correlation between region (i, j) and all other regions at distance d from region (i, j) is calculated as follows: $$ C_{i,j}(d)=\sum_{(i-k)^2+(j-\ell)^2=d^2} C_{i,j,k,\ell} $$ The correlation function C(d) is computed as the average of this total correlation over all regions: $$ C(d)=\frac{1}{l_x l_y}\sum_{i=0}^{l_x-1}\sum_{j=0}^{l_y-1} C_{i,j}(d) $$ The distance value d50 is defined as C(d50) = 0.5. Note that the distance d in these expressions is given in units of 32 pixels (139.2 μm) so it must be multiplied by 139.2 μm to convert to physical units.

Cell movies

To obtain high-resolution images, we used the 35-mm diameter glass-bottom dishes (Iwaki Co., Ltd., Tokyo, Japan) along with a high numerical aperture objective lens. The smaller size of the glass dishes allowed us to place up to three dishes simultaneously inside a temperature- and humidity-controlled microscope (BZ-X700, KEYENCE, Osaka, Japan), enabling continued observation of multiple cell cultures incubating under identical conditions. The glass-bottom culture dishes were collagen-coated following the procedure described above from the 10.0 μg/mL collagen type-I solution. Uncoated controls were prepared using only the vehicle solution (1 mM HCl) in the same procedure. NB1RGB cells (6.0 × 10⁴ cells per 35-mm diameter dish) were seeded in the glass-bottom dishes and cultured for 90 h while taking time-lapse images at 15-min intervals using a microscope (BZ-X700, KEYENCE, Osaka, Japan) (see Supplementary Videos 1 and 2). The VW-H2MA motion analyser (KEYENCE), which performs cell tracking, was used to measure the cell migration velocity. The displacement of each isolated cell during each 15-min interval was measured, from which the average velocity of each cell during that time-interval (which is essentially the instantaneous velocity due to the cells moving slowly) was determined (Fig. 2a). This velocity was averaged over 20 cells. The instantaneous velocities for low- and high-density conditions were respectively based on data from 0 to 30 h and 60 to 90 h after culture start. The 10-h-average velocity, shown in Fig. S3D, is based on the linear displacement during each 10 h interval. This velocity was averaged over 30 cells. The 10-h average velocities for low- and high-density conditions were respectively based on data from 0 to 30 h and 60 to 90 h after culture start (Fig. S3E). The area S and perimeter L of the NB1RGB cells were measured using ImageJ after culturing for 24 h, and circularity was calculated as 4πS/L² (Fig. 2e, f).

Mathematical model

To obtain insight into the role of collagen in the 2D patterns formed by fibroblasts, we introduced a simple mathematical model of cell collective motion. The model equations are given as: $$ \frac{\mathrm{d}x_i}{\mathrm{d}t}=v\cos \theta_i+R_x(i) \quad (1) $$ $$ \frac{\mathrm{d}y_i}{\mathrm{d}t}=v\sin \theta_i+R_y(i) \quad (2) $$ $$ \frac{\mathrm{d}\theta_i}{\mathrm{d}t}=K\sum_j \exp\left(-\frac{r_{ij}^2}{2\lambda^2}\right)\sin 2\left(\theta_j-\theta_i\right)+\sqrt{\mu}\,\xi_i \quad (3) $$ where the variables (xi(t), yi(t)) and θi(t) are the position and the orientation of cell i at time t, respectively. In this model, cell i migrates spontaneously in the direction θi(t) with constant speed v.
The terms Rx and Ry denote the repulsive force due to cell–cell excluded-volume interactions, described by a Gaussian soft-core potential \( H(r)=\sigma^2 K_C \exp\left(-\frac{r^2}{2\sigma^2}\right) \) with interaction strength KC, distance r between cells, and repulsion length σ. Explicitly, the terms Rx and Ry are given as: $$ R_x(i)=-\frac{\partial}{\partial x_i}\sum_j H(r_{ij})=K_C\sum_j (x_i-x_j)\exp\left(-\frac{r_{ij}^2}{2\sigma^2}\right) $$ $$ R_y(i)=-\frac{\partial}{\partial y_i}\sum_j H(r_{ij})=K_C\sum_j (y_i-y_j)\exp\left(-\frac{r_{ij}^2}{2\sigma^2}\right) $$ where rij stands for the distance between cell i and cell j. Equation (3) describes the nematic interaction with strength K between the cells that leads to the nematic alignment of cell orientations. A similar interaction was considered by Sumino et al. [7]. The effective interaction strength in Eq. (3) is \( K\exp\left(-\frac{r_{ij}^2}{2\lambda^2}\right) \) with characteristic interaction length λ. An additive noise is also introduced in Eq. (3), where μ is the noise strength and ξi(t) is white noise with zero mean and unit variance, i.e., 〈ξi(t)ξj(s)〉 = δijδ(t − s).

Fixed parameters

The interaction lengths σ and λ are both fixed to σ = λ = 0.696 units (70 μm) in all simulations. We take 140 μm to be the typical cell length, cf. Fig. 2c, d, and have chosen σ and λ to be ½ of this value. Except when the excluded volume effect is turned off for Fig. 4, case (vi) (KC = 0), the soft-core interaction strength is taken to be KC = 0.89/h. This value was found via trial and error to reproduce the experimental characteristics well. The continuous-time model of Eqs. (1)–(3) was discretised for each cell using Euler's method with a step size of 30 min, for a simulation length of 144 h (6 days), resulting in 144 × 2 + 1 = 289 frames. To match the experimental data, each frame's dimensions were adjusted to 69.6 × 52.2 units, where 1 unit is 100 μm. Instead of considering periodic boundaries, which would not match the experimental conditions, we added 15-unit wide margins to the simulation frames. Therefore, the actual frame dimensions for the simulation were 99.6 × 82.2 units, within which cells were initially placed. When creating the simulation video and while taking quantitative measurements for comparison with the experimental results, these margins were ignored. The initial number of cells is N0 = 3434, which corresponds to the initial cell density in the experiments, ~ 8000 cells/cm². All the simulated cells were initialised with random positions (xi(0), yi(0)) in the window and with a random orientation θi(0) in the range [0, 2π). Based on Fig. 2b, we assume that the increase in the cell number induced by cell divisions starts at t = 24 h, with the cell number multiplied by a factor of 1.005 every 0.5 h, so that approximately 11,000 cells result after 144 h of cultivation. Each new cell is divided from a randomly chosen existing cell, and positioned 0.35 units (35 μm) away from the parent cell, either in front of or behind the parent on its line of motion with equal probability, and with the same orientation and direction of motion. The distance of 35 μm (= σ/2) was chosen so that the position of the offspring would initially lie within the length of the parent, cf. Fig. 2c, d. We do not take into account cell cycles; i.e. each cell proliferates irrespective of its proliferation history.
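For concreteness, here is a condensed sketch (ours, not the authors' code) of one way to advance Eqs. (1)–(3) with the explicit Euler scheme described above. Parameter values follow the Methods where they are stated (σ = λ = 0.696 units, KC = 0.89/h, a 30-min step, 144 h of simulated time); the cell number is reduced, proliferation and the 15-unit margins are omitted for brevity, and the noise term is discretised in the standard Euler–Maruyama way. Variable names are illustrative.

```python
# Condensed sketch of Eqs. (1)-(3): explicit Euler updates of cell positions and
# orientations with soft-core repulsion and distance-weighted nematic coupling.
# Proliferation and the 15-unit frame margins are omitted here for brevity.
import numpy as np

rng = np.random.default_rng(1)

# Units: 1 length unit = 100 um, time in hours. Values follow the Methods where
# stated; N is reduced from 3434 to keep this demonstration fast.
v, K, Kc = 0.1731, 0.020, 0.89      # speed, nematic strength, repulsion strength
sigma = lam = 0.696                 # repulsion / nematic interaction lengths
mu, dt = 0.1, 0.5                   # noise strength, Euler step (h)
Lx, Ly, N = 69.6, 52.2, 500         # frame size and (reduced) cell number

x = rng.uniform(0, Lx, N)
y = rng.uniform(0, Ly, N)
theta = rng.uniform(0, 2 * np.pi, N)

def euler_step(x, y, theta):
    dx = x[:, None] - x[None, :]                     # pairwise x_i - x_j
    dy = y[:, None] - y[None, :]
    r2 = dx**2 + dy**2
    np.fill_diagonal(r2, np.inf)                     # exclude self-interaction
    rep = Kc * np.exp(-r2 / (2 * sigma**2))          # soft-core repulsion kernel
    Rx = (rep * dx).sum(axis=1)
    Ry = (rep * dy).sum(axis=1)
    nem = np.exp(-r2 / (2 * lam**2)) * np.sin(2 * (theta[None, :] - theta[:, None]))
    # Euler-Maruyama noise: dt * sqrt(mu/dt) * N(0,1) = sqrt(mu*dt) * N(0,1)
    dtheta = K * nem.sum(axis=1) + np.sqrt(mu / dt) * rng.standard_normal(N)
    return (x + dt * (v * np.cos(theta) + Rx),
            y + dt * (v * np.sin(theta) + Ry),
            theta + dt * dtheta)

for _ in range(int(144 / dt)):                       # 144 h of simulated time
    x, y, theta = euler_step(x, y, theta)
```

A block-averaged orientation field and d50 can then be extracted from the final positions and orientations in the same way as for the experimental images.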
We expect that the detail of the proliferation rule does not considerably affect our results since the cell number is large enough for random and cyclic proliferation rules to be statistically almost equivalent. GIF animations from the 289 frames were generated using Gnuplot. The block-average orientation θkℓ of the cells in region (k, ℓ) is defined as the exponent appearing in the equation below: $$ {R}_{k\mathit{\ell}}{e}^{i2{\theta}_{k\mathit{\ell}}}=\frac{1}{N_{k\mathit{\ell}}}\sum \limits_{j=0}^{N_{k\mathit{\ell}}-1}{e}^{i2{\theta}_j} $$ where i is the square root of − 1, and Nkℓ is the number of cells in region (k, ℓ). Correlation data were then measured at t = 6, 24, 48, 72, 96, 120, and 144 h, following the protocol described above for the experimental observations. To correct for the effects of randomness in the simulation, ten simulations with the same parameters were run and averaged data were reported as the simulation results. Dependence of coherence length on migration speed The simulations were conducted for the following three cases with regard to the constant speed v: The measured migration velocity at low density on an uncoated dish: v = 0.1864 unit/h (1 unit =100 μm, cf. Fig. 2a), The measured migration velocity at low density on a coated dish: v = 0.1598 unit/h (cf. Fig. 2a), The average of the above two: v = 0.1731 unit/h. The dependence of the coherence length on these selections is shown in Fig. 3g. The solid, dashed, and dotted lines are the average d50 values over ten simulations for cases (1), (2), and (3), respectively. The error bars are the standard deviation of ten runs for case (1). Statistical analyses The data were analysed with the one-tailed Student's t test. The values were expressed as the mean ± mean standard error. Changes were considered to be significant if the p value from the Student's t test was less than 0.05. Weijer, C.J.: Collective cell migration in development. J. Cell Sci. 122, 3215–3223 (2009) Ilina, O., Friedl, P.: Mechanisms of collective cell migration at a glance. J. Cell Sci. 122, 3203–3208 (2009) Devreotes, P.: Dictyostelium discoideum: a model system for cell-cell interactions in development. Science 245, 1054–1058 (1989) Article ADS Google Scholar Carter, S.B.: Haptotaxis and the mechanism of cell motility. Nature 213, 256–260 (1967) Lauffenburger, D.A., Horwitz, A.F.: Cell migration: a physically integrated molecular process. Cell 84, 359–369 (1996) Li, L., He, Y., Zhao, M., Jiang, J.: Collective cell migration: implications for wound healing and cancer invasion. Burns Trauma 1, 21–26 (2013) Sumino, Y., Nagai, K.H., Shitaka, Y., Tanaka, D., Yoshikawa, K., Chate, H., Oiwa, K.: Large-scale vortex lattice emerging from collectively moving microtubules. Nature 483, 448–452 (2012) Nishiguchi, D., Nagai, K.H., Chate, H., Sano, M.: Long-range nematic order and anomalous fluctuations in suspensions of swimming filamentous bacteria. Phys. Rev. E 95, 020601 (2017) Popp, D., Yamamoto, A., Iwasa, M., Maeda, Y.: Direct visualization of actin nematic network formation and dynamics. Biochem. Biophys. Res. Commun. 351, 348–353 (2006) Kawaguchi, K., Kageyama, R., Sano, M.: Topological defects control collective dynamics in neural progenitor cell cultures. Nature 545, 327–331 (2017) Noyes, W.F.: Studies on the human wart virus. II. Changes in primary human cell cultures. Virology 25, 358–363 (1965) Elsdale, T.R.: Parallel orientation of fibroblasts in vitro. Exp. Cell Res. 
51, 439–450 (1968) Duclos, G., Garcia, S., Yevick, H.G., Silberzan, P.: Perfect nematic order in confined monolayers of spindle-shaped cells. Soft Matter 10, 2346–2353 (2014) Li, X., Balagam, R., He, T.F., Lee, P.P., Igoshin, O.A., Levine, H.: On the mechanism of long-range orientational order of fibroblasts. Proc. Natl. Acad. Sci. U. S. A. 114, 8974–8979 (2017) Peruani, F., Deutsch, A., Bar, M.: Nonequilibrium clustering of self-propelled rods. Phys. Rev. E 74, 030904 (2006) Li, W., Fan, J., Chen, M., Guan, S., Sawcer, D., Bokoch, G.M., Woodley, D.T.: Mechanism of human dermal fibroblast migration driven by type I collagen and platelet-derived growth factor-BB. Mol. Biol. Cell 15, 294–309 (2004) Meister, P., Konrad, E., Hohne, N.: Incidence and histological structure of the storiform pattern in benign and malignant fibrous histiocytomas. Virchows Arch. A Pathol. Anat. Histol. 393, 93–101 (1981) Nakamura, I., Kariya, Y., Okada, E., Yasuda, M., Matori, S., Ishikawa, O., Uezato, H., Takahashi, K.: A novel chromosomal translocation associated with COL1A2-PDGFB gene fusion in dermatofibrosarcoma protuberans: PDGF expression as a new diagnostic tool. JAMA Dermatol. 151, 1330–1337 (2015) Puspoki, Z., Storath, M., Sage, D., Unser, M.: Transforms and operators for directional bioimage analysis: a survey. Adv. Anat. Embryol. Cell Biol. 219, 69–93 (2016) Yang, Y., Jamilpour, N., Yao, B., Dean, Z.S., Riahi, R., Wong, P.K.: Probing leader cells in endothelial collective migration by plasma lithography geometric confinement. Sci. Rep. 6, 22707 (2016) Vitorino, P., Meyer, T.: Modular control of endothelial sheet migration. Genes Dev. 22, 3268–3281 (2008) Tojkander, S., Gateva, G., Lappalainen, P.: Actin stress fibers--assembly, dynamics and biological roles. J. Cell Sci. 125, 1855–1864 (2012) Rottner, K., Stradal, T.E.: Actin dynamics and turnover in cell motility. Curr. Opin. Cell Biol. 23, 569–578 (2011) Ridley, A.J., Hall, A.: The small GTP-binding protein rho regulates the assembly of focal adhesions and actin stress fibers in response to growth factors. Cell 70, 389–399 (1992) Hall, A.: Rho GTPases and the actin cytoskeleton. Science 279, 509–514 (1998) Amano, M., Chihara, K., Kimura, K., Fukata, Y., Nakamura, N., Matsuura, Y., Kaibuchi, K.: Formation of actin stress fibers and focal adhesions enhanced by rho-kinase. Science 275, 1308–1311 (1997) Nobes, C.D., Hall, A.: Rho, rac, and cdc42 GTPases regulate the assembly of multimolecular focal complexes associated with actin stress fibers, lamellipodia, and filopodia. Cell 81, 53–62 (1995) Rottner, K., Hall, A., Small, J.V.: Interplay between Rac and Rho in the control of substrate contact dynamics. Curr. Biol. 9, 640–648 (1999) Kageyama, Y., Doi, T., Akamatsu, S., Kuroyanagi, G., Kondo, A., Mizutani, J., Otsuka, T., Tokuda, H., Kozawa, O., Ogura, S.: Rac regulates collagen-induced HSP27 phosphorylation via p44/p42 MAP kinase in human platelets. Int. J. Mol. Med. 32, 813–818 (2013) Igata, T., Jinnin, M., Makino, T., Moriya, C., Muchemwa, F.C., Isqahihara, T., Ihn, H.: Up-regulated type I collagen expression by the inhibition of Rac1 signaling pathway in human dermal fibroblasts. Biochem. Biophys. Res. Commun. 393, 101–105 (2010) Dallon, J.C., Sherratt, J.A.: A mathematical model for fibroblast and collagen orientation. Bull. Math. Biol. 60, 101–129 (1998) Dallon, J.C., Sherratt, J.A., Maini, P.K.: Mathematical modelling of extracellular matrix dynamics using discrete cells: fiber orientation and tissue regeneration. J. Theor. Biol. 
199, 449–471 (1999) Painter, K.J.: Modelling cell migration strategies in the extracellular matrix. J. Math. Biol. 58, 511–543 (2009) Vicsek, T., Czirok, A., Ben-Jacob, E., Cohen, I.I., Shochet, O.: Novel type of phase transition in a system of self-driven particles. Phys. Rev. Lett. 75, 1226–1229 (1995) Article ADS MathSciNet Google Scholar Chate, H., Ginelli, F., Gregoire, G., Peruani, F., Raynaud, F.: Modeling collective motion: variations on the Vicsek model. Eur. Phys. J. B 64, 451–456 (2008) Kuramoto, Y., Chemical Oscillations, Waves, and Turbulence. Springer, Berlin (1984). Sugimura, K., Kori, H.: Exponential system-size dependence of the lifetime of transient spiral chaos in excitable and oscillatory media. Phys. Rev. E 92, 062915 (2015) Mayor, R., Etienne-Manneville, S.: The front and rear of collective cell migration. Nat. Rev. Mol. Cell Biol. 17, 97–109 (2016) Theveneau, E., Marchant, L., Kuriyama, S., Gull, M., Moepps, B., Parsons, M., Mayor, R.: Collective chemotaxis requires contact-dependent cell polarity. Dev. Cell 19, 39–53 (2010) Devenport, D.: The cell biology of planar cell polarity. J. Cell Biol. 207, 171–179 (2014) Aw, W.Y., Devenport, D.: Planar cell polarity: global inputs establishing cellular asymmetry. Curr. Opin. Cell Biol. 44, 110–116 (2017) Sugimura, K., Kori, H.: A reduced cell-based phase model for tissue polarity alignment through global anisotropic cues. Sci. Rep. 7, 17466 (2017) This work was supported by grants from the Leading Graduate School Promotion Center, Ochanomizu University. We are grateful to Dr. Khayrul Bashar, Dr. Kyogo Kawaguchi, and Dr. Kyohei Shitara for their helpful advice. We are also grateful to Dr. Daiki Nishiguchi for providing the ImageJ script for automatic image analysis. K.Y. acknowledges support from the Leading Graduate School Promotion Center of Ochanomizu University for a long-term stay at Virginia Tech; she thanks Professor John J. Tyson and his Computational Cell Biology Lab at Virginia Tech for their warm hospitality and the valuable advice she received while this work was conducted. M.G. acknowledges the financial support from JSPS KAKENHI Grant No. 26860144. H.K. acknowledges the financial support from MEXT KAKENHI Grant No. 15H05876 and JSPS KAKENHI Grant No. 18K11464. Kei Hashimoto, Kimiko Yamashita, and Kanako Enoyoshi contributed equally to this work. Graduate School of Humanities and Sciences, Ochanomizu University, Ohtsuka, Bunkyo-ku, Tokyo, Japan Kei Hashimoto, Kimiko Yamashita, Kanako Enoyoshi & Hiroshi Kori Program for Leading Graduate Schools, Ochanomizu University, Ohtsuka, Bunkyo-ku, Tokyo, Japan Kei Hashimoto, Kimiko Yamashita, Kanako Enoyoshi & Xavier Dahan Institute for Human Life Innovation, Ochanomizu University, Ohtsuka, Bunkyo-ku, Tokyo, Japan Kei Hashimoto & Mari Gotoh Physics Division, National Center for Theoretical Sciences, Hsinchu, Taiwan Kimiko Yamashita Department of Physics, National Tsing Hua University, Hsinchu, Taiwan Institute of High Energy Physics, Chinese Academy of Sciences, Beijing, 100049, China Institute for Excellence in Higher Education, Tohoku University, Sendai, Japan Xavier Dahan Department of Physics, Virginia Tech, Blacksburg, VA, 24061, USA Tatsu Takeuchi Department of Complexity Science and Engineering, Graduate School of Frontier Sciences, The University of Tokyo, Kashiwa, Japan Hiroshi Kori Kei Hashimoto Kanako Enoyoshi Mari Gotoh Correspondence to Hiroshi Kori. 
Open Access: this article is licensed under a Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/).

Hashimoto, K., Yamashita, K., Enoyoshi, K. et al. The effects of coating culture dishes with collagen on fibroblast cell shape and swirling pattern formation. J Biol Phys 46, 351–369 (2020). https://doi.org/10.1007/s10867-020-09556-3

Keywords: Pattern formation; Cell population
Hilbert 2nd problem a history of the mathematics preceding and relevant to Hilbert's statement of his 2nd problem, initiating his program for the foundations of mathematics -- see Hilbert problems By about 1820, mathematicians had developed deductively a large part of analysis using the real numbers and their properties as a starting point. During the 50 years that followed, in a program that came to be known as the Arithmetization of analysis, Bolzano, Cauchy, Weierstrass, Dedekind, Cantor, and others succeeded in "reducing" analysis to the arithmetic of natural numbers $\mathbb{N}$. Dedekind himself expressed this as follows:[1] ... every theorem of algebra and higher analysis, no matter how remote, can be expressed as a theorem about natural numbers, -- a declaration I have heard repeatedly from the lips of Dirichlet. In the final three decades of the 19th century, efforts were underway to axiomatize the whole of mathematics.[2] It thus became clear that (with the aid of a certain amount of set theoretic and logical apparatus) the entire body of traditional pure mathematics could be constructed rigorously starting from the theory of natural numbers. These efforts proceeded piecemeal and depended greatly on concurrent developments in logic. Major contributors were these: in logic: Boole, Peirce, and Frege in set theory: Cantor and Dedekind in arithmetic: Frege, Dedekind, and Peano in geometry: Pasch and Hilbert In a 1900 lecture to the International Congress of Mathematicians in Paris, David Hilbert presented a list of open problems in mathematics. The 2nd of these problems, known variously as the compatibility of the arithmetical axioms and the consistency of arithmetic, served as an introduction to his program for the foundations of mathematics. The article views the 30-year period from 1872 to 1900 as historical background to Hilbert's program for the foundations of mathematics. There are other, different and equally interesting views of this same period: as a continuation and, indeed, culmination of the previous half-century (1822-1872) during which "mathematicians restored and surpassed the standards of rigour" that had long been established, but then neglected, the whole 80-year period called "the formalisation of mathematics."[3] as the first half of the decades-long effort (1872-193X) "from the days of Cantor and Dedekind in the 1870s, through Russell in the 1900s, to the work of Godel in the 1930s" that resulted in the solid establishment of "the modern discipline of foundations."[4] However viewed, this 30-year period, from the construction of the real numbers to the Hilbert Problems address, saw "mathematicians of the first rank" engaged with these questions:[5] the character of the infinite the relationship between logic and arithmetic the status of geometry the nature of mathematics itself For a history of the subsequent development of Hilbert's program for the foundations of mathematics, which was initiated by his 2nd problem, see the article Hilbert program. 
Contents

1 Non-mathematical issues
2 Introduction of infinite sets
3 Early development of mathematical logic
4 The algebra of logic
4.1 Syllogistic logic
4.2 Peacock's and De Morgan's contributions
4.3 Boole's algebra of logic
4.4 Jevons and De Morgan's extensions
4.5 C S Peirce's logic
4.6 Two curious aspects of Grundlagen
5 Hilbert's 2nd problem
7 Primary sources

Non-mathematical issues

As is the case for other, especially older programs and periods of mathematics, the history of Hilbert's program was complicated by non-mathematical issues.[6] Some authors were slow to publish their results; others published only selectively, leaving some important results to be published by students and successors. The works of still others, though published, were partially or completely ignored. As a first example, consider the work of Galileo. His concerns about the "paradoxical" property of infinite sets are often mentioned in published discussions of the potentially infinite and the actually infinite. Yet, even today, doubts are expressed about whether or not Galileo had influence either on Cantor, the mathematician whose name is most often and most closely associated with the notion of infinite sets, or on any other mathematician.[7] Further, consider Gauss' well-known comment about actual infinities and Cantor's "answer" -- made 55 years later:

Gauss (in a letter dated 1831): I protest against the use of infinite magnitude as something completed, which in mathematics is never permissible. Infinity is merely a facon de parler, the real meaning being a limit which certain ratios approach indefinitely near, while others are permitted to increase without restriction.

Cantor (in an article dated 1886): I answered [Gauss] thoroughly, and on this point did not accept the authority of Gauss, which I respect so highly in all other areas ...

There is some doubt even today about whether we really understand either Gauss' comment or Cantor's response.[8] Again, consider the work of Bolzano. His paper "Paradoxes of the Infinite" contains some remarkable results related to the theory of infinite sets:[9]

- the word "set" appears here for the first time
- examples of 1-1 correspondences between the elements of an infinite set and the elements of a proper subset

Yet Bolzano himself never published these results. The paper itself was not published until 1851, three years after his death, by one of his students. Further, Cantor appears not to have become aware of Bolzano's paper until 1882, some years after he began his own work on infinite sets, which was motivated by the Arithmetization of analysis. Nor did Cantor mention Bolzano's paper in his own work until 1883.[10] A related historical anomaly is that while Bolzano both knew of and referred to Galileo's work on the infinite, Cantor did neither.[11] C S Peirce may hold the record in this regard, having made the following "discoveries in formal logic and foundational mathematics, nearly all of which came to be appreciated only long after he died":[12] In 1860, years before Cantor, "he suggested a cardinal arithmetic for infinite numbers." In 1880–81, anticipating Sheffer by 33 years, he invented the Peirce arrow, a binary operator (logical NOR: "(neither) … nor …") sufficient in itself for Boolean algebra. In 1881, he set out the axiomatization of natural number arithmetic, a few years before Dedekind and Peano.
In the same paper, years before Dedekind, he gave the first purely cardinal definition of a "Dedekind-finite" set and an (implied) formal definition of a "Dedekind-infinite" set, i.e., one that can be put into a one-to-one correspondence with one of its proper subsets. In 1885, he distinguished between first-order and second-order quantification In the same paper, anticipating Zermelo by about two decades, he set out what can be read as a (primitive) axiomatic set theory. The achievements of Christine Ladd-Franklin in the algebra of logic provide a further, unsettling example of outstanding work not only ignored in the past by logicians, but also forgotten in the present by historians of logic:[13] In 1883, while a student of C. S. Peirce at Johns Hopkins University, Christine Ladd-Franklin published a paper titled "On the Algebra of Logic," in which she develops an elegant and powerful test for the validity of syllogisms that constitutes the most significant advance in syllogistic logic in two thousand years. That her work was overlooked probably resulted, in part, from the simple fact that she was a woman. The extent to which the lack of attention to Ladd-Franklin's work was gender-related is evidenced by this additional, unhappy fact:[14] Ladd-Franklin had completed all the requirements for the Ph.D. at Johns Hopkins University by 1882, [but] since the university did not admit women, she was not awarded the degree until 1926. As a final example, consider that Frege's work "seems to have been largely ignored by his contemporaries."[15][16] Three [of six] reviews of the "revolutionary" Begriffsschrift," including one by no less than Venn, show that their authors were either uninterested in Frege's innovations or had completely misunderstood them. The Grundlagen only received a single review, and that one was "a devastatingly hostile" one, by Cantor, whose ideas were [ironically] the closest to Frege's. Die Grundgesetze der Arithmetik … except for one review by Peano, was ignored by his contemporaries. It was not until Russell acknowledged Frege's work as the trailblazing foundation for the Principia that the greatness of his accomplishment was recognized.[17] Russell himself contrasted the greatness of Frege's contributions with the limited nature of his influence among his contemporaries as follows:[18] In spite of the epoch-making nature of [Frege's] discoveries, he remained wholly without recognition until I drew attention to him in 1903. As a consequence of these and other non-mathematical issues, some mathematical results in the period under examination were achieved multiple times, albeit in slightly different forms or using somewhat different methods, by different authors. Even without the effects of such issues, the mathematics of the past (both long- and recent-past) is still replete with achievements that are said to be "roughly" or "more or less" or "just about" what we know today. About De Morgan's work on mathematical induction, for example, two types of claims have been made: that he put a process that had been used without clarity on a rigorous basis[19] that he introduced and defined the term "mathematical induction" itself[20] Yet, another source, citing the contents of De Morgan's own published papers, has refuted both of these claims.[21] Finally, a note on some quasi-mathematical matters that are purposely not discussed in this article. 
Without doubt, positions in the philosophy of mathematics known as Logicism, Formalism, and Intuitionism, along with important methodological and epistemological considerations, grew out of the mathematical practice of the late 19th and early 20th centuries.[22] Further, these philosophical positions were of great interest to some mathematicians and certainly influenced the mathematical problems on which they chose to work. Yet a discussion of either the past origins of or the current nature and status of these philosophical positions would not significantly aid our understanding of the mathematics that resulted from the work of those mathematicians.[23]

Much of the existing literature [of the period surrounding the Hilbert Problems address] has been philosophically motivated and preoccupied with the exegesis of individual thinkers, notably Frege and Russell, who are widely (and rightly) viewed as founding giants of analytical philosophy. But the wider mathematical context has in the process often been lost from sight.

A discussion of these philosophical positions is here omitted. It is that "wider mathematical context" on which this article focuses.

Introduction of infinite sets

In mathematics, uses of infinity and the infinite (and great concerns about those uses) are as old as Grecian urns. Greek mathematicians followed Aristotle in dividing such uses into two major types, one called "potential infinity", the other called "actual infinity."[24]

With respect to magnitudes:[25]

* a potential infinity was something endlessly extendible, and yet forever finite;
* an actual infinity was something such as the number of points on a line.

Similarly, with respect to sets:

* a potentially infinite set was, for example, a finite collection of numbers that can be enlarged as much as one wished
* an actually infinite set was, for example, the complete collection of all such natural numbers

Ancient Greek mathematicians developed rigorous methods for using potential infinities. However, with the apparent exception of Archimedes noted below, they avoided using actual infinities.[26] Important early examples of uses of infinity and the infinite include these:

Euclid skirted the notion of the actually infinitely large in proving that the primes are potentially infinite. This is how he stated his theorem:[27]

Prime numbers are more than any assigned multitude of prime numbers.

Archimedes, however, appears to have investigated actually infinite numbers of objects:[28]

... certain objects, infinite in number, are "equal in magnitude" to others [implying] that not all such objects, infinite in number, are so equal. ... [thus] infinitely many objects [of] definite, and different magnitudes … are manipulated in a concrete way, apparently by something rather like a one-one correspondence...

Oresme, an early (14th century) mathematician, examined infinite sets using a method prescient of Cantor's method of one-to-one correspondence. Oresme demonstrated that two actually infinite sets (the set of odd natural numbers and the set of all natural numbers) could be "different" and "unequal" and yet "equinumerous" with one another.
He concluded that notions of equal, greater, and less do not apply to the infinite.[29] Mathematical induction, as a technique for proving the truth of propositions for an infinite (indefinitely large) number of values, was used for hundreds of years before any rigorous formulation of the method was made[30] Galileo produced the standard one-to-one correspondence between the positive integers and their squares, reminiscent of Oresme's work. He termed this a "paradox" that results "unavoidably" from the property of infinite sets and concluded, alike with Oresme, that infinite sets are incomparable.[31] ... the totality of all numbers is infinite, and ... the number of squares is infinite; neither is the number of squares less than the totality of all numbers, nor the latter greater than the former; and, finally, the attributes "equal", "greater", and "less" are not applicable to the infinite, but only to finite quantities. As recently as 1831, Gauss himself argued against the actually infinite:[32] I protest against the use of infinite magnitude as something completed, which in mathematics is never permissible. Infinity is merely a facon de parler, the real meaning being a limit which certain ratios approach indefinitely near, while others are permitted to increase without restriction. For the most part, however, mathematicians of the 19th and 20th centuries developed and readily took up methods for using actual infinities that were as rigorous as those the Greeks developed for potential infinities.[33] Certainly Bolzano had no concerns about the "paradoxical" property of infinite sets. Indeed, his theories of mathematical infinity anticipated Cantor's theory of infinite sets. His contribution to the understanding of the nature of the infinite was threefold:[34] 1. he defined the idea of a set I call a set a collection where the order of its parts is irrelevant and where nothing essential is changed if only the order is changed. 2. he argued that the infinite set does exist if the integers are a set, then arbitrarily large subsets of integers are subsets of the set of integers, which must itself be actually infinite 3. he gave examples to show that, unlike for finite sets, the elements of an infinite set could be put in 1-1 correspondence with elements of one of its proper subsets. The actual infinite is said to have entered algebra in the 1850s in Dedekind's work with quotient constructions for modular arithmetic:[35] [T]he whole system of infinitely many functions of a variable congruent to each other modulo $p$ behaves here like a single concrete number in number theory.… The system of infinitely many incongruent classes—infinitely many, since the degree may grow indefinitely—corresponds to the series of whole numbers in number theory. The five-year period 1868–1872 has been called "the birth of set-theoretic mathematics." A salient milestone was 1871, when Dedekind introduced "an essentially set-theoretic viewpoint … using set operations and … structure-preserving mappings … and terminology that Cantor was later (1880) to use in his own work.[36] By 1872, procedures involving infinite sets were employed in constructions of irrational real numbers developed during the Arithmetization of analysis by Weierstrass, Dedekind, and Cantor. 
"Thus analysis [had been reduced] not simply to the theory of natural numbers, but to the theory of natural numbers together with the theory of infinite sets."[37] The constructions of Cantor and Dedekind especially relied implicitly on set theory and, further, "involve the assumption of a Power Set principle."[38] The realization that (apparently) all the material needed for analysis could be constructed out of the natural numbers using set-theoretic means led to these new questions:[39] What further could be said about set-theoretic procedures and assumptions of logic, both of which underlay these accounts of the real numbers? Do we have to take the natural numbers themselves as simply given, or can anything further be said about those numbers, perhaps by reducing them to something even more fundamental? In the 1870s, the notions of set and class themselves appeared straightforward. Their problematic aspects did not become apparent until Cantor's theory of transfinite numbers gave rise to various paradoxes of set theory.[40] Early development of mathematical logic The history of logic has been described, "with some slight degree of oversimplification," as having three stages: (1) Greek logic, (2) Scholastic logic, and (3) mathematical logic.[41] From ancient times through the first half of the 19th century, the state of logic was as follows:[42][43] logic was understood to be "the laws of thought" the Aristotelian syllogism was the ultimate form of all reasoning the logic that a mathematician used did not affect the mathematics that she did The mathematical context in which logic developed played a role in its shaping. The broad motives behind its development started a two-phased movement:[44] initially, there was a great expansion in the scope of logic subsequently, a progressive restriction occurred Both the initial expansion and the subsequent restriction of logic were linked to work in the foundations of mathematics. The initial expansion of the scope of mathematical logic began during the second half of the 19th century with these two steps: the algebraization of syllogistic logic the development of the predicate calculus Taken together, these steps accomplished the following: they extended the use of symbolism "beyond the subject matter of mathematics, to the reasoning used in mathematics."[45] they provided "the technical basis for … the transition from informal to formal proof."[46] Looking back, these developments may seem to us almost natural, perhaps because we know of their beneficial results. Somewhat concurrently, the scope of mathematical logic expanded a step further to include the theories of sets and of relations.[47] This further development, however, was accompanied by highly unexpected and seriously problematic consequences. The algebra of logic The beginning of mathematical logic has been dated from the years in which Boole and De Morgan published their works on the algebraization of Aristotelian logic.[48] Whereas in [Greek and Scholastic logic] theorems were derived from ordinary language, [mathematical logic] proceeds in a contrary manner—it first constructs a purely formal system, and only later does it look for an interpretation in everyday speech. Thus, with the advent of mathematical logic, the logic of the syllogism came to be treated as one interpretation of a calculus of logic. 
Syllogistic logic Aristotle's system of syllogistic logic is closely linked to the grammatical structure of natural language.[49] A syllogism is a logical argument consisting (usually) of three statements, one of which (the conclusion) is inferred from the other two (the premises). Here is an Example Syllogism: All $Greeks$ are $Sapiens$ All $Sapiens$ are $Mortal$ All $Greeks$ are $Mortal$ Each statement of this syllogism has two parts: a Subject and a Predicate: the Subject consists of a Quantifier (All, Some, No, or Not All) and a Common Noun the Predicate consists of a Copula Verb (are) and a Common Noun We can think of the Common Nouns in the statements of a syllogism either as expressing properties of things or as referring to classes of things that have those properties. In each syllogism, there is always one Common Noun that occurs in both premises, but not in the conclusion. This Common Noun, which links the two premises of the syllogism, is called the middle term of the syllogism. In the Example Syllogism above, the middle term is the Common Noun "Sapiens". A syllogism is valid if the conclusion follows logically from the premises, no matter what Common Nouns are used in its statements; otherwise, the syllogism is invalid. If the syllogism is valid and the premises are true, then the conclusion is true. The Example Syllogism is valid. Its validity has nothing to do with the particular Common Nouns that are used. If the Common Nouns in the Example Syllogism were replaced by different Common Nouns, the result would still be a valid syllogism. It is the form of the Example Syllogism that makes it valid, not the Common Nouns used in its statements. Replacing the Common Nouns in the Example Syllogism by symbols for classes makes this clear: All $A$ are $B$ All $B$ are $C$ All $A$ are $C$ Each statement of a syllogism is one of 4 types, as follows: A All $A$ are $B$ E No $A$ are $B$ (= All $A$ are not $B$) I Some $A$ are $B$ O Not All $A$ are $B$ (= Some $A$ are not $B$) (In ordinary language, the forms E and O have the alternate forms shown. Note that by introducing an additional Copula Verb are not and using the equivalent forms, we can eliminate the Quantifiers No and Not All, reducing them to just All and Some.) The statements of the Example Syllogism are all of type A and, therefore, the Example Syllogism itself is said to be of type AAA. Aristotle hypothesized that all valid syllogisms have something fundamental in common, which he attempted to find. Efforts to do so continued for two millennia. Finally, in 1883, in a volume edited by Peirce himself, Christine Ladd-Franklin published a paper describing a logical system that sufficed to "capture this common feature."[50] Her system was described as having provided "the definitive solution of the problem of the reduction of syllogism."[51] Peacock's and De Morgan's contributions Even before Boole's work, important steps were taken towards the development of a calculus of logic. As early as 1830, Peacock suggested that the symbols for algebraic objects need not be understood only as numbers.[52][53] 'Algebra' … has been termed Universal Arithmetic: but this definition is defective, in as much as it assigns for the general object of the science, what can only be considered as one of its applications. 
In his treatise, Peacock distinguished between arithmetical algebra, with laws derived from operations on numbers, and symbolic algebra, which he describes as follows:[54] the science which treats the combinations of arbitrary signs and symbols by means defined through arbitrary laws…. We may assume any laws for the combination and incorporation of such symbols, so long as our assumptions are independent, and therefore not inconsistent with each other. In 1847, De Morgan extended Peacock's vision for a symbolic algebra with the notion that the interpretations of symbols not only for algebraic objects, but also for algebraic operations were arbitrary.[55] De Morgan's contribution to logic was twofold. First, he insisted on the purely formal or, as he put it, "symbolic" nature of algebra, the study of which has as it object "symbols and their laws of combination, giving a symbolic algebra which may hereafter become the grammar of a hundred distinct significant algebras."[56][57] Consider his example of a commutative algebra to which he provided five interpretations, among which are the three listed:[58] Given symbols $M, N, +$, and one sole relation of combination, namely that $M + N$ is the same as $N + M$: $M$ and $N$ may be magnitudes, and $+$ the sign of addition of the second to the first $M$ and $N$ may be numbers, and $+$ the sign of multiplying the first to the second $M$ and $N$ may be nations, and $+$ the sign of the consequent having fought a battle with the antecedent De Morgan's second contribution was to clarify the nature of logical validity as "that part of reasoning which depends upon the manner in which inferences are formed…. Whether the premises be true or false, is not a question of logic…. the question of logic is, does the conclusion certainly follow if the premises be true?"[59] Boole's algebra of logic Both Boole and De Morgan were aware of the limitations of syllogistic logic, in particular, that there were inferences known to be valid, but whose validity could not be demonstrated by syllogistic logic. Their intent was to develop "a general method for representing and manipulating all logically valid inferences."[60] The significant difference in Boole's approach from De Morgan's was the algebraic methods that Boole adopted. In 1847, in "a little book that De Morgan himself recognized as epoch-making," Boole undertook the following:[61][62] the goal: "to express traditional logic more perspicuously using the techniques of algebra" such that deduction becomes calculation the program: to develop an algebraic calculus and show that the doctrines of traditional logic can be expressed using this calculus. In this early work, Boole extended De Morgan's view about the formal nature of algebra by presenting the view that the essential character of the whole of mathematics is formal, somewhat as follows:[63] If any topic is presented in such a way that it consists of symbols and precise rules of operation upon these symbols, subject only to the requirement of inner consistency, this topic is part of mathematics. In 1854, Boole published a second book, the completion of his efforts "to incorporate logic into mathematics by reducing it to a simple algebra, pointing out the analogy between algebraic symbols and those that represent logical forms, and beginning the algebra of logic that came to be called Boolean algebra."[64] Boole eventually gave his uninterpreted calculus three interpretations, in terms of classes, of probabilities, and also of propositions. 
These various interpretations were possible because of analogies among the concepts of a class, an event, and a statement. As a consequence, the "order" relation in a Boolean algebra can be interpreted variously as set-theoretical inclusion, as causal follow-up of events, as logical follow-up of statements.[65] A modified version of the third interpretation of his calculus became modern propositional logic. This latter is today the lowest level of modern logic, but at the time and in effect, it was all of logic, because it was able to be used for Aristotelian syllogistic logic:[66] it used symbols for statements rather than numbers it defined operations on statements rather than on numbers it defined deductions as equations and as the transformation of equations Boole's second (1854) book was an effort "to correct and perfect" his first. He introduced the formalisms of his algebra, including these:[67] Classes were $x, y, z$. There was a universal class $1$ and an empty class $0$. Multiplication $x \cdot y$ was intersection, yielding $x \cdot y = y \cdot x$. Next, he gave the idempotent law $x \cdot x = x$. Addition $x+y$ was aggregation (for $x, y$ disjoint), yielding $x + y = y+x$ and $z(x + y) = z \cdot x + z \cdot y$. Also, $x − y = − y + x$ and $z(x − y) = z \cdot x − z \cdot y$. Boole did not, however, go on from all of this to build an axiomatic foundation for his algebra of logic. Instead, he introduced three theorems (Expansion, Reduction, Elimination) and used them in his "General Method" for analyzing syllogistic arguments. Boole's introduction of an Elimination theorem is interesting as an example of his commitment to an algebraic approach to logic. As shown in the Example Syllogism above, the middle term of a syllogism is a Common Noun that occurs in both of the premises. In effect, it links the two other Common Nouns of the syllogism, allowing them to be joined in the conclusion. Observing this, Boole reasoned that syllogistic logic produces a conclusion by eliminating that middle term, so he introduced into his algebra of logic an Elimination theorem, which he borrowed from the ordinary algebraic theory of equations.[68] The final version of Boole's method "for analyzing the consequences of propositional premises," briefly stated, is as follows:[69] convert (or translate) the premise statements of the syllogism into equations, apply a prescribed sequence of algebraic processes to the equations, including application of the three theorems mentioned above, yielding equational conclusions re-convert the equational conclusions back into statements, yielding the desired conclusions of the syllogism. With this method Boole had replaced the art of reasoning from premise statements to conclusion statements with a routine mechanical algebraic procedure. 
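To make the convert-manipulate-reconvert routine concrete, here is a minimal Python sketch (this is an illustration in modern set-theoretic terms, not Boole's own symbolic procedure, and the function names are invented). It reads Boole's equation $A \cdot B = A$ for "All $A$ are $B$" as the set identity $A \cap B = A$ and checks exhaustively, over all classes drawn from a small universe, that the two premise equations of an AAA syllogism force the conclusion equation. Boole's own worked example of the same syllogism follows below.

```python
from itertools import product

def all_are(x, y):
    # Boole's equational reading of "All x are y": x.y = x, i.e. (x intersect y) == x
    return (x & y) == x

def check_aaa_syllogism(universe_size=4):
    """Check, over every assignment of classes drawn from a small universe,
    that the premise equations A.B = A and B.C = B force the conclusion A.C = A."""
    universe = range(universe_size)
    # every subset of the universe, encoded as a frozenset
    subsets = [frozenset(s for s in universe if bits & (1 << s))
               for bits in range(2 ** universe_size)]
    for a, b, c in product(subsets, repeat=3):
        if all_are(a, b) and all_are(b, c) and not all_are(a, c):
            return False  # a counterexample would invalidate the syllogism
    return True

if __name__ == "__main__":
    print(check_aaa_syllogism())  # True: no counterexample in this universe
```

The finite universe here only illustrates the point; the validity of the AAA form does not depend on the size of the universe, as Boole's algebraic derivation shows.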
Boole showed, with somewhat mixed results, that his algebra provided "an easy algorithm for syllogistic reasoning," an elementary example of which is as follows:[70][71]

an Aristotelian syllogism of the AAA type:

(1) $\text{All } A \text{ are } B$: $A \cdot B = A$
(2) $\text{All } B \text{ are } C$: $B \cdot C = B$
(3) $A (B \cdot C) = A$ -- substituting in (1) the value of $B$ given by (2)
(4) $(A \cdot B)C = A$ -- applying the associative law for multiplication
(5) $\text{All } A \text{ are } C$: $A \cdot C = A$ -- substituting in (4) the value of $A \cdot B$ from (1)

No less than De Morgan himself praised Boole's work as a remarkable proof that "the symbolic processes of algebra, invented as tools of numerical calculation, [are] competent to express every act of thought, and to furnish the grammar and dictionary of an all-containing system of logic."[72]

Taken at face value, De Morgan's praise overstated the adequacy of Boole's logic of propositions without quantification, in two ways:

* it was inadequate to express some important statements of mathematics such as the law of mathematical induction, on which De Morgan himself had worked;
* it was also inadequate to express some statements of ordinary language with a form such as, "If all horses are animals, then all heads of horses are heads of animals."

In fact, this example was De Morgan's own, intended "to show the inadequacy of traditional logic" and that, for a logic adequate to express this example, "binary relations are essential."[73] If, however, we take De Morgan's comment to be about some yet-to-be-developed logic with quantification, then we can accept that his optimism about Boole's calculus was not misplaced.

Jevons and De Morgan's extensions

For three decades after Boole introduced his calculus in 1847, "most researchers interested in formal logic worked on extending and improving [his] system."[74] In 1864, Jevons published an alternative system of algebraic logic, retaining Boole's use of algebraic equations as the basic form of logical statements, but rejecting Boole's desire to retain "dependence on" the ordinary algebra of numbers. More generally, Jevons replaced the use of classes (associated with quantity) with predicates (associated with quality).[75]

Both Boole and Jevons understood logic to be an expression of "the laws of thought." Yet Boole had more of an algebraic concept of logic and saw deduction as calculation, while Jevons argued that mathematics proceeds from logic, seeing calculation as deduction.[76] Further on, we shall see that, like Jevons, Frege envisioned logic as a predicate-based foundation to mathematics, though his method of realizing this vision was not algebraic, but axiomatic -- see Frege's predicate logic.
De Morgan himself extended Boole's calculus with a law of duality that asserts for every theorem involving addition and multiplication, there is a corresponding theorem in which the words addition and multiplication are interchanged.[77] Interpreted as a logic of classes, we have this: If $x$ and $y$ are subsets of a set $S$, then the complement of the union of $x$ and $y$ is the intersection of the complements of $x$ and $y$ the complement of the intersection of $x$ and $y$ is the union of the complements of $x$ and $y$ Interpreted as a logic of propositions, we have this: If $p$ and $q$ are propositions, then not $(p$ or $q)$ equals not $p$ and not $q$ not $(p$ and $q)$ equals not $p$ or not $q$ C S Peirce's logic Peirce was convinced of the general notion that "Mathematics is the science which draws necessary conclusions."[78] Further, he was "committed to the broadly 'algebraic' tradition' of his father, Benjamin, and of his colleague, Boole. It is not surprising that, on reading of Frege's belief that mathematics could be derived from logic, Peirce responded that logic was properly seen as a branch of mathematics, not vice versa.[79] Though De Morgan had clearly located the inadequacy of syllogistic logic in its inability to express binary relations, he himself lacked "an adequate apparatus for treating the subject." The title "creator of the theory of relations" has been awarded to C. S. Peirce.[80] In several papers published between 1870 and 1882, [Peirce] introduced and made precise all the fundamental concepts of the theory of relations and formulated and established its fundamental laws … in a form "much like the calculus of classes developed by G. Boole and W. S. Jevons, but which greatly exceeds it in richness of expression." In a series of papers, Peirce introduced his "claw" symbolism $\prec$ and used it to develop his logic of inferences:[81] he defined $\prec$ as follows: $A \prec B$ is explicitly defined as $A$ implies $B$, and $A \overline{\prec} B$ defines $A$ does not imply $B$. he defined illation (material implication or logical inference) as follows: $A \prec A$, whatever $A$ may be. If $A \prec B$, and $B \prec C$, then $A \prec C$. he distinguished universal and particular propositions, affirmative and negative, according to the following scheme: A All A are B $a \prec b$ E No A is B $a \prec \bar{b}$ I Some A is B $\bar{a} \prec b$ O Some A is not B $\bar{a} \prec \bar{b}$ By means of all the above, Peirce transformed the Aristotelian syllogism into a hypothetical proposition, "with material implication as its main connective." For example, he symbolized the syllogistic form AAA of our Example Syllogism (discussed previously) as follows:[82] If $x \prec y$, and $y \prec z$, then $x \prec z$. Comparing Peirce's formalism above with the remarkably similar formalism of the familiar Peano-Russell notation below $[(x ⊃ y) ⋅ (y ⊃ z)] ⊃ (x ⊃ z)$. 
it is difficult to conclude other than that "the differences are entirely and solely notational."[83]

Here is a succinct summary of how syllogistic logic was transformed by the algebraic tradition:[84]

* Aristotle's syllogistic logic, entirely linguistic, was a ''logic of terms'' that were connected by a ''copula of existence'', expressing the inherence of a property in a subject;
* Boole's formal logic, expressed algebraically, was a ''logic of classes'' that were connected by a ''copula of class inclusion'';
* De Morgan's formal logic, also algebraic, was a ''logic of relations'' whose relata were connected by a ''copula of relations''.
* Peirce's formal logic was a ''logic of inference'' that took in, combined, and went beyond each of these. His terms (of syllogisms), classes, and propositions were connected by a ''copula of illation''.

Subsequently, Peirce extended his logic into a predicate calculus by adding a theory of quantification to his logic of relations -- see [[#Peirce.27s logic of quantifiers|Peirce's logic of quantifiers]].

Peirce's work on (binary) relations and quantification was continued and extended in a very thorough and systematic way by Schröder, whose published work of 1895 was lauded in 1941 as "so far the only exhaustive account of the calculus of relations."[85]

==Cantor's early theory of sets==

Set theory is the study of sets, their properties, and the operations that can be performed on them. It has been especially concerned with sets that have infinitely many elements.[86]

Broadly defined, the term ''naive set theory'' connotes an informal set theory developed in a natural language in which such words as "and", "or", "if ... then", "not", "for some", and "for every" are not rigorously defined. The term includes these various versions of set theory:[87][88]

# Cantor's early (pre-1883 ''Grundlagen'') theory of sets
# Cantor's later general theory of sets, the basis of the theory of transfinite numbers
# set theories (axiomatic or otherwise) developed informally by Dedekind, Peano, and Frege
# modern, informally developed versions of an axiomatic set theory, as in ''Naive Set Theory'' by Paul Halmos.

This section examines the first of these, namely, Cantor's early theory of sets.

===Bolzano's contribution===

In spite of Cantor's pre-eminence in the area of set theory, the first to work with sets was Bolzano, as was noted above in [[#Introduction of infinite sets|Introduction of infinite sets]]. It is from him that we have the following early definitions:

::... an embodiment of the idea or concept which we conceive when we regard the arrangement of its parts as a matter of indifference (1847)[89]

::... an aggregate so conceived that it is indifferent to the arrangement of its parts (1851)[90]

It was also Bolzano who first used the German word ''Menge'' for set, a usage that Cantor himself continued in his theory.[91] Despite this, Bolzano's understanding of the notion of set was incomplete, especially with respect to the important distinction between the element/set relation and the part/whole relation.
Consider, as evidence, his use of the word "parts" (''Teile'') to refer to the elements of a set in his description given above.[92] Further, Bolzano thought it absurd to consider a set with only one element, while he failed entirely to consider the null set.[93] Nevertheless, it was Bolzano who identified sets as "the carriers of the property finite or infinite in mathematics."[94]

===Cantor's discoveries===

Traditional views give to Cantor (not entirely undeservedly) all or most of the credit for having developed set theory:

::naive set theory is primarily due to Cantor[95]

::the first development of set theory was a naive set theory … created at the end of the 19th century by Georg Cantor.[96]

::"For most areas [of mathematics] a long process can usually be traced in which ideas evolve until an ultimate flash of inspiration, often by a number of mathematicians almost simultaneously, produces a discovery of major importance. Set theory however is rather different. It is the creation of one person, Georg Cantor."[97]

::Set theory, as a separate mathematical discipline, was born in late 1873 in the work of Georg Cantor.[98]

To this need to be added nuanced caveats, such as these:

::Cantor's work should be considered as a completion of a long historical process[99]

::The concept of set is no Athena: school children understand it now; but its development was long drawn out, beginning with the earliest counting and reckoning and extending into the late nineteenth century.[100]

This statement seems a reasonable summary:

::Both the theory of real numbers and the idea of a function depended upon an informal notion of set. Cantor turned the very simple idea of a set into a rich theory which was to become the foundation of modern mathematics.[101]

Even today, it is known that early study of naive set theory and early work with naive sets are useful in mathematics education:[102]

* they aid in developing a facility for working more formally with sets
* they aid in understanding the motivation for axiomatic set theory

Cantor's first ideas on set theory were contained in papers on trigonometric series, but for the most part he developed the set concept and its theory as a consistent basis for his work with infinite sets.[103][104] In 1873, he discovered that the linear continuum is not countable, which he treated as an invitation to investigate the "different sizes of infinity" and the domain of the transfinite.[105][106] The following is a brief account of how his discovery came about:[107]

::* Cantor, in correspondence with Dedekind, asked the question whether the infinite sets $\mathbb{N}$ of the natural numbers and $\mathbb{R}$ of real numbers can be placed in one-to-one correspondence.
::* Dedekind, in reply, offered a proof of the following:
::::the set $\mathbb{A}$ of all algebraic numbers, the set of all real roots of equations of the form $a_n x^n + a_{n-1} x^{n-1} + a_{n-2} x^{n-2} + \cdots + a_1 x + a_0 = 0$, where each $a_i$ is an integer, is denumerable (i.e., there is a one-to-one correspondence with $\mathbb{N}$).
::* Cantor, a few days later, proved that the assumption that $\mathbb{R}$ is denumerable leads to a contradiction, using the Bolzano-Weierstrass principle of completeness.

Thus, Cantor showed that "there are more elements in $\mathbb{R}$ than in $\mathbb{N}$ or $\mathbb{Q}$ or $\mathbb{A}$," in this precise sense:

::::the cardinality of $\mathbb{R}$ is strictly greater than that of $\mathbb{N}$.

A consequence of all this, Cantor noted, was proving anew an old (1844) result of Liouville's, namely, the existence (in every interval) of (uncountably many) transcendental numbers. In effect, there are, in any real interval, more transcendental numbers than algebraic numbers.[108][109]

In 1874, ''Crelle's Journal'' published Cantor's paper reporting this remarkable result and, in doing so, marked the birth of set theory. Previously, all infinite collections were assumed to be of "the same size." Cantor invoked the concept of a 1-to-1 correspondence to show that "there was more than one kind of infinity."[110][111][112]

Here is a summary of Cantor's published results involving the early version of his naive set theory:[113][114]

# in 1874, a proof that the set of real numbers is not denumerable, i.e. is not in one-to-one correspondence with (is not equipollent to) the set of natural numbers.
# in 1878, a definition of what it means for two sets $M$ and $N$ to have the same power or cardinal number; namely that they be [[equipollent sets]].
# also in 1878, a proof that the set of real numbers and the set of points in n-dimensional Euclidean space have the same power, using a precisely developed notion of a one-to-one correspondence.

Cantor actually achieved this last result -- at the time quite paradoxical -- in 1877, after which he wrote to Dedekind to report it, saying "I see it, but I don't believe it!"[115]

There were others who really didn't believe it! Cantor submitted a paper reporting the result to ''Crelle's Journal''. Kronecker, who had significant influence over what was published in the journal, disliked much of Cantor's set theory and fundamentally disagreed with Cantor's work with infinite sets. The paper was published only after Dedekind intervened on Cantor's behalf.[116]

In 1878, Cantor stated his [[Continuum hypothesis|Continuum Hypothesis]], asserting that every infinite set of real numbers is either countable, i.e., it has the same cardinality as $\mathbb{N}$, or has the same cardinality as $\mathbb{R}$. From that point until 1883, these were the only two infinite powers or cardinal numbers.[117]

In all of these early papers, up to his development of the theory of transfinite numbers in 1883, Cantor's notion of a set was essentially as follows:[118]

::a set is a collection of elements that constitute the extension of a (mathematical) concept

with the further important understanding that

::the concept involved is defined only for objects of some given (mathematical) domain.
With the proviso noted, we can make these further points about this:[119]

* Cantor's early notion is the notion of set as it is most often applied in mathematics
* the proviso noted ensures that the paradoxes of set theory simply do not arise

Finally, in an 1882 paper, Cantor made the following point with respect to what were termed "undecidable" concepts:[120]

::an algorithm for deciding whether or not the concept determining a set applies to any particular object in the given domain is not needed for the concept to be the basis of a well-defined set.

He gives, as an example, the set of algebraic numbers, which (as mentioned above) he himself had determined was countable. This set, Cantor insisted, is well-defined, even though determining whether or not a particular real number is algebraic "may or may not be possible at a given time with the available techniques."

===Two presentations of naive set theory===

''Set theory'' begins with two fundamental notions, ''objects'' and ''sets'' of those objects.[121]

''Membership'' is a fundamental binary relation between objects $o$ and sets $A$. If $o$ is a ''member'' (or element) of $A$, write $o \in A$.

''Set inclusion'' is a derived binary relation between two sets. If all the members of set $A$ are also members of set $B$, then $A$ is a '''subset''' of $B$, denoted $A \subseteq B$. $A$ is called a '''proper subset''' of $B$ if and only if $A$ is a subset of $B$, but $B$ is not a subset of $A$.

Set theory features binary operations on sets, such as these:

* ''Union'' of the sets $A$ and $B$, denoted $A \cup B$, is the set of all objects that are a member of $A$, or $B$, or both.
* ''Intersection'' of the sets $A$ and $B$, denoted $A \cap B$, is the set of all objects that are members of both $A$ and $B$.
* ''Set difference'' of $U$ and $A$, denoted $U \setminus A$, is the set of all members of $U$ that are not members of $A$. When $A$ is a subset of $U$, the set difference $U \setminus A$ is also called the complement of $A$ in $U$. In this case, if the choice of $U$ is clear from the context, the notation $A^c$ is sometimes used instead of $U \setminus A$.
* ''Symmetric difference'' of sets $A$ and $B$, denoted $A \bigtriangleup B$ or $A \ominus B$, is the set of all objects that are a member of exactly one of $A$ and $B$ (elements which are in one of the sets, but not in both). It is the set difference of the union and the intersection, $(A \cup B) \setminus (A \cap B)$ or $(A \setminus B) \cup (B \setminus A)$.
* ''Cartesian product'' of $A$ and $B$, denoted $A \times B$, is the set whose members are all possible ordered pairs $(a,b)$ where $a$ is a member of $A$ and $b$ is a member of $B$.
* ''Power set'' of a set $A$ is the set whose members are all possible subsets of $A$.

- - - - -

Beginning with the fundamental notions of ''set'' and ''belongs to'' or ''is a member of'' and assuming that sets have properties usually associated with collections of objects, Paul Halmos, in his 1960 text, developed informally an axiomatic set theory that presented the binary relation of set inclusion and the binary operations noted above, as follows:[122]

# ''Axiom of Extension'': Two sets are equal if and only if they have the same elements.
# ''Axiom of Specification'': For every set $S$ and every proposition $P$, there is a set which contains those elements of $S$ which satisfy $P$ and nothing else.
# ''Axiom of Pairs'': For any two sets there is a set which contains both of them and nothing else.
# ''Axiom of Union'': For every collection of sets, there is a set that contains all the elements and only those that belong to at least one set in the collection.
# ''Axiom of Powers'': For each set $A$ there is a collection of sets that contains all the subsets of the set $A$ and nothing else.
# ''Axiom of Infinity'': There is a set containing $0$ and the $successor$ of each of its elements.
# ''Axiom of Choice'': The Cartesian product of a non-empty indexed collection of non-empty sets is non-empty.

In addition, there is an axiom stipulating (more or less) that anything intelligent one can do to the elements of a set yields a set:

:8. ''Axiom of substitution'': If $S(a,b)$ is a sentence such that for each $a$ in set $A$ the set $\{b: S(a,b)\}$ can be formed, then there exists a function $F$ with domain $A$ such that
::::$F(a) = \{b:S(a,b)\}$ for each $a$ in $A$.

An informally developed naive set theory with these axioms is equipped to do the following:[123]

* develop concepts of ''ordered pair'', ''relation'', and ''function'', and to discuss their properties
* discuss ''numbers'', ''cardinals'', ''ordinals'', and their arithmetics,
* discuss different kinds of ''infinity'', in particular, the uncountability of the set of real numbers

===Paradoxes and Cantor's early set theory===

A discussion of paradoxes is relevant in two ways to the theory of sets:

* the "paradoxes of the infinite" that had to be overcome before set theory could be developed
* the paradoxes that later arose out of the development of set theory itself

It is interesting to consider that Cantor succeeded in resolving the "paradoxes of the infinite" and providing a coherent account of cardinal number for infinite multiplicities, while Bolzano, though he made great progress with the use and understanding of sets, failed to do so. Two points have been made to account for this.

First, there were notions about which, when applied to infinite sets, Bolzano was confused:[124]

* the cardinal number of the set of points in an interval
* the magnitude of the line interval as a geometric object

Cantor accepted for infinite sets what had long been accepted for finite sets, namely, that "the relation of having the same cardinal number is defined in terms of equipollence." Thus, since "the interval (0, 1) of real numbers is equipollent to the 'larger' interval (0, 2)," then these two sets of points, though different in magnitude, nevertheless have the same cardinal number. Bolzano, along with many others, rejected this.[125]

Second, Euclid's Common Notion 5, that ''the whole is greater than the part'', was a barrier to working with infinite sets. Euclid's principle does indeed apply to geometric magnitude. It may be that Cantor's clear understanding of the first point above allowed him to see that in the domain of sets, infinite sets are simply a counterexample to Euclid's principle. Whatever the reason, Bolzano did not see this.[126]

As a final issue, it is worth commenting on the oft-repeated claim that working with naive set theory inexorably leads one to paradoxes.
One such claim is the following:[127]

::Naïve set theory is intuitive and simple, but unfortunately leads very soon to controversial statements [, because] it relies on an informal understanding of sets as collections of objects, called the elements or members of the set, that is [, it relies] on a predicate indicating that a collection is a set and a relation type symbol to represent set membership.

This claim, however, does not apply to Cantor's early theory of sets, which is the naive set theory that we have been examining. Certainly an aspect of set theory (naive or otherwise) that can lead to controversy and paradoxes is the use of ''unrestricted'' predicates (properties/concepts) to determine sets. Cantor's early theory of sets, however, determines sets using ''restricted'' concepts. It is worth repeating here Cantor's early notion of set:

::a set is a collection of elements that constitute the extension of a (mathematical) concept that is defined only for objects of some given (mathematical) domain.

Sets determined in accordance with such a notion do not give rise to paradoxes.

==Dedekind's theory of sets==

The intention to get rid of geometrical intuitions as a genuine source of mathematical knowledge was the impetus for the two great programs of 19th century mathematics: rigorization and foundations.[128][129]

::[...] There is a natural transition from the arithmetization of analysis that came to fruition in the 1870's to interest in the foundations of arithmetic that flowered in the 1880's.

Dedekind played a major role in both of these programs. One goal of his was to examine set-theoretic procedures and their connections to the assumptions of logic.[130]

===Dedekind's "logic"===

In 1888, Dedekind published his major work on the foundations of arithmetic. For him, sets were logical objects and the corresponding notion was a fundamental concept of logic.[131] In fact, he identified three basic logical notions:[132]

::# ''object'' ("Ding")
::# ''set'' (or system, "System")
::# ''function'' (mapping, "Abbildung")

He held these logical notions to be "fundamental for human thought" and yet, at the same time, "capable of being elucidated," in part, by "observing what can be done with them, including how arithmetic can be developed in terms of them."

Dedekind, emphasizing that both sets and functions were to be defined extensionally, connected his three notions of logic as follows:[133]

* "sets are a certain kind of object ... about which we reason by considering their elements, and this is all that matters about sets."
* functions, arbitrary ways of correlating the elements of sets, are yet not reducible to sets; neither are they presented by formulas nor representable in intuition (via graphs) nor decidable by formal procedures.

Dedekind defined the concept of infinity using his three basic (undefined) notions of logic along with definable notions, such as subset, union, and intersection:[134]

::a set of objects is ''infinite'' … if it can be mapped one-to-one onto a proper subset of itself.
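To make this definition concrete, here is a minimal Python sketch (an illustration in modern terms, not part of Dedekind's presentation; the helper names are invented). The successor map $n \mapsto n+1$ is the classic witness that the natural numbers are infinite in this sense: it is one-to-one and it sends the whole set into the proper subset that omits $0$. A program can only spot-check these properties on a finite prefix, which is all the sketch claims to do.

```python
def successor(n: int) -> int:
    # The map n -> n + 1: a one-to-one correspondence between the natural
    # numbers and the proper subset {1, 2, 3, ...} (the naturals without 0).
    return n + 1

def check_on_prefix(limit: int = 1000) -> bool:
    """Spot-check, on the finite prefix 0..limit-1, the two properties that let
    the successor map witness that N is infinite in Dedekind's sense:
    it is injective, and every image lies in the proper subset N without 0."""
    images = [successor(n) for n in range(limit)]
    injective = len(set(images)) == len(images)
    lands_in_proper_subset = all(m != 0 for m in images)
    return injective and lands_in_proper_subset

if __name__ == "__main__":
    print(check_on_prefix())  # True on the sampled prefix
```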
Dedekind's theory of sets (systems) … made an appeal (for the most part implicit -- but see below) to a principle of unrestricted comprehension, a principle according to which every condition defines a set.[135] Dedekind accepted "general notions of set and function and the actual infinite." His notion of set was "unrestricted" in these three senses:[136]

# it involved an implicit acceptance of a general comprehension principle
# it involved a universal set: "the totality of all things that can be objects of my thought" -- the ''Gedankenwelt''
# it involved consideration of arbitrary subsets of that totality -- a general ''Aussonderungsaxiom''

Dedekind's presentations proceeded informally. He presented his theory using some formal machinery, though without a great deal of precision and explicitness, but he provided no explicit list of axioms or rules of inference.[137][138]

::Dedekind ... has not [an] over-ruling passion … to demonstrate his position conclusively, and is content with the usual informal mathematical standard of rigour. As a result, [however,] his work has … mathematical elegance [absent in more formal presentations].

Frege himself commented on Dedekind's book as follows:[139]

# his expressions set and belongs to "are not usual in logic and are not reduced to acknowledged logical notions"
# "an inventory of the logical laws taken by him as basic is nowhere to be found."

The first remark is fair comment, but the second "is not altogether true" since "Dedekind does state some of the basic principles of set theory, ''which for him are part of logic''."[140] Such a view of sets leads rather to axiomatic set theory than to higher-order logics.[141] All of this is in keeping with Dedekind's ultimate purpose, which was not to axiomatize arithmetic, but to define mathematical notions in terms of logical ones.[142] Judged retrospectively, [Dedekind's] contributions belong more to modern mathematics and algebra than to mathematical logic narrowly construed.[143]

===Two versions of Dedekind's principles===

Dedekind stated various principles satisfied by his notion of set.[144] "These principles are not explicitly introduced as axioms, but they nonetheless bear a close relation to the later axioms of set theory."[145]

The notion of set: different things $a, b, c$ can be considered from a common point of view … and we say that they form a set (system)

1. the set $S$ is the same as the set $T$ ($S = T$) when every element of $S$ is also an element of $T$ [and vice versa]
2. a set $A$ is said to be a subset (part) of a set $S$ when every element of $A$ is also [an] element of $S$.

Unfortunately, in discussing this concept, Dedekind fails somewhat to distinguish the two notions member and subset, thereby "identifying an element $s$ with its unit set $\{s\}$."[146]

3. the empty set $\emptyset$ is "wholly excluded" from Dedekind's logic for "certain reasons."[147]
4. the Union of any arbitrary sets $A, B, C, …$ is defined
5. the Intersection of any sets $A, B, C, …$ is defined, with the proviso that the sets have at least one common element -- arising from the absence of the empty set!
6. the principle of Comprehension is not stated, but is assumed implicitly and is appealed to explicitly in the proof of one of Dedekind's theorems, which contains the following text: "If we denote by 𝐄 the set of all things possessing the property 𝜠…."
7. a set is infinite (said to be Dedekind-infinite) if it can be put in 1-1 correspondence with (is similar to) a proper subset of itself; otherwise it is finite.

- - - - -

The above principles of Dedekind's logical framework "bear a remarkably close relationship to [the axioms of] modern axiomatic set theory," devised by Zermelo and set out below.[148] The notes to the axioms show connections to Dedekind's Principles.

''Set'' is an undefined notion, introduced as follows: Set theory is concerned with a domain B of individuals, which we shall call simply objects and among which are the sets.

* I ''Axiom of Extensionality'': sets are defined by their members -- Dedekind's Principle 1
* II ''Axiom of Elementary Sets'':
**(a) ''the empty set'' -- Dedekind's Principle 3
**(b) ''the unit set'' $\{a\}$ of $a$
**(c) ''the unordered pair set'' $\{a,b\}$ of any objects $a, b$
* III ''Axiom of Separation'': This axiom and parts (b) and (c) of Axiom II replace an intuitive (naive) ''Axiom of Comprehension'', stating that, given any property, there exists the set of all things having that property and (unfortunately) leading directly to Russell's paradox -- Dedekind's Principle 6
* IV ''Axiom of Power Set'': to every set $A$ there corresponds the set of all subsets of $A$, $P(A)$ -- Dedekind does not deal with infinite sets, so does not need the concept of Power Set
* V ''Axiom of Union'': -- Dedekind's Principle 4. A special axiom for Intersection is not needed, since it follows from the other axioms.
* VI ''Axiom of Choice'': needed to prove that sets ''ordinary-infinite'' are also ''Dedekind-infinite'' -- Dedekind's Principle 7
* VII ''Axiom of Infinity'': Dedekind's Principle 7 -- "essentially due to Dedekind" owing to his failure to prove the existence of an infinite set

==The predicate calculus==

For millennia mathematics had been a science based on deductive logic. But no account of logic had ever been produced which was adequate for the purposes of mathematics.[149] The logic of propositions, for example, was not powerful enough either to represent all types of assertions used specifically in mathematics or to express certain types of equivalence relationships that hold generally between assertions. Consider the following two examples:[150]

1. Assertions such as $x \text{ is greater than } 1$, where $x$ is a variable, appear quite often in mathematical inferences.
:However, the logic of propositions can deal with such assertions only when stated with an explicit value for $x$, such as $2 \text{ is greater than } 1$. Otherwise, such assertions are not propositions: until the value of $x$ is explicitly stated, they are neither true nor false.

2. The patterns involved in the following logical equivalences are common in inferences:
::* "Not all birds fly" is equivalent to "Some birds don't fly"
::* "Not all integers are even" is equivalent to "Some integers are not even"
::* "Not all cars are expensive" is equivalent to "Some cars are not expensive"
:However, the logic of propositions treats the two assertions in such equivalences independently.
::Let $P$ represent "Not all birds fly" and $Q$ represent "Some integers are not even"
::There is no general mechanism in the logic of propositions to determine whether or not $P$ is equivalent to $Q$. Each such equivalence must be listed individually to be used in inferencing.
:Instead, we want to have a rule of inference that covers all these equivalences collectively and can be used when necessary. In other words, we need a more powerful logic to deal with these assertions.

===Peirce's logic of quantifiers===

As a way of affirming his intention to develop a theory in which ''logic becomes calculation'', Peirce defined quantifiers to emphasize their analogy with arithmetic operations:[151]

::Here, in order to render the notation as iconical as possible, we may use $\sum$ for the quantifier '''Some''', suggesting a sum, and $\prod$ for the quantifier '''All''', suggesting a product. Thus $\sum_i x_i$ means that $x$ is true of some one of the individuals denoted by $i$ or
::::$\sum_i x_i = x_i + x_j + x_k + \text{etc.}$
::In the same way,
::::$\prod_i x_i = x_i x_j x_k \text{etc.}$
::If $x$ is a simple relation,
::::$\prod_i \prod_j x_{i, j}$ means that every $i$ is in this relation to every $j$,
::::$\sum_i \prod_j x_{i, j}$ means that some one $i$ is in this relation to every $j$.

Applying Peirce's quantifiers as defined above to his logic of relations, we can then write as follows:[152]

::for the relation $i$ $loves$ $j$
::::$loves_{i,j}$
::for the statement using this relation "Everybody $loves$ somebody"
::::$\prod_i \sum_j loves_{i,j}$

===Frege's predicate logic===

Frege well knew of the inadequacies of propositional logic. Further, he understood that the various constructions of the real numbers and the associated introduction of infinite sets into mathematics rested on two pillars:[153]

# the procedures of set theory
# the assumptions of logic

Having determined that set-theoretic procedures were somehow "founded in logic," he sought to answer this question: What, then, were the basic notions of logic?

As we have seen above, Boole developed his algebraic logic as a means by which ''deduction becomes calculation''. In 1879, Frege published his "axiomatic-deductive" logic, which stood Boole's purpose on its head:[154]

* Frege's goal: "to establish ... that arithmetic could be reduced to logic" and, thus, to create a logic by means of which ''calculation becomes deduction''
* Frege's program: to develop arithmetic as an axiomatic system and show that all the axioms were truths of logic

An essential point of Frege's project has been summarized as an effort "to get beyond the 'deductive' reasoning of syllogisms [of classical logic] to all the 'inductive' rules [used in mathematical proofs, which] require writing down, not just true … statements about [specific] numbers (etc), but reasoning about collections of numbers together."[155]

There is a further, perhaps more essential point to Frege's project. Those working on the mid-19th-century arithmetization of analysis sought a precise manner of defining fundamental concepts of mathematics, such as limit, convergence, and continuity. In developing his logic, Dedekind sought a precise way of stating the results of his investigations into the nature of set and of numbers. Frege went further, however, seeking "a precise way not only of stating results, but also of proving them."
His insight was to realize "the difficulties of doing so using ordinary language, which was ... imprecise and ambiguous."[156]

In 1879, Frege published a system of predicate logic that proved sufficient for the formalisation of mathematics. He achieved this by abandoning the Subject-Predicate analysis of sentences used in Aristotle's syllogistic logic.[157]

The inspiration for Frege's predicate logic came from the mathematical concept of a function. He saw that the predicates in the statements of a syllogism could be expressed as concepts with variables that take arguments. The predicate "is Mortal" can be expressed as a concept that takes one argument, $\operatorname{Mortal}(x)$. The predicate "is a Teacher of" can be expressed as a concept that takes two arguments, $\operatorname{Teacher}(x, y)$. Viewed as such, predicates behave like functions in this sense:[158]

::when specific values (names) replace the concept variables, the predicate is transformed into a statement that is true or false.

Frege strongly urged the adoption of this functional interpretation of predicates:[159]

::Logic has hitherto always followed ordinary language and grammar too closely…. I believe that the replacement of the … subject and predicate by argument and function, respectively, will stand the test of time.

The greatest advance of Frege's logic over Aristotle's was its generality. It could handle all of the following:[160]

::* conjunctions, disjunctions, conditionals, and biconditionals of propositional logic
::* the logical equivalences involving negation described above
::* all combinations of quantifiers (All, Some, No, Not All)
::* relations, i.e., predicates involving two (or more) subjects

In addition, Frege "drew attention to numerous important distinctions, e.g. between $x$ and $\{x\}$ and between $\in$ and $\subseteq$," which distinctions Dedekind failed somewhat to make in his theory of sets.[161]

Using his logic, Frege defined with precision a formal deductive system, for which reason above all others he is nowadays commonly regarded as the founding father of modern logic.[162]

Unfortunately, Frege symbolized statements using a far from intuitive 2-dimensional graphical method (Begriffsschrift -- 'concept-script' or 'ideography'). Here are his symbolizations for the four Aristotelian syllogistic sentence forms: '''A''', '''E''', '''I''', and '''O''':[163]

::{| class="wikitable"
|-
! A: All a that are X are P !! E: All a that are X are not P !! I: Not all a that are X are not P !! O: Not all a that are X are P
|-
| style="text-align: center;" | [[File:Fregea1.jpg]] || style="text-align: center;" | [[File:Fregee1.jpg]] || style="text-align: center;" | [[File:Fregei1.jpg]] || style="text-align: center;" | [[File:Fregeo1.jpg]]
|}

It is thought that Frege's cumbersome symbolism was what kept his logic from being adopted initially. Eventually, Frege's logic was combined with Peano's more intuitive notation, to create the predicate calculus used today.

We will only mention briefly here what will be discussed farther on, namely, two essential elements of Frege's logic that bear on his theory of arithmetic:[164][165]
''concepts'' are the basic notions of logic, while ''sets'', which Frege defines as the extensions of predicates, are derivative notions: thus the set $\{x:P(x)\}$ is the extension of the predicate $P(x)$ :This view of sets is a broadening of Cantor's early (restricted) view of naive sets, which were defined only for objects of a given mathematical domain. :2. for any logically definable predicate $P(x)$, we can form the set $\{a:P(a)\}$ :This ''naive comprehension principle'' has an analog stated implicitly in the Principles of Dedekind's theory of sets. ==='"`UNIQ--h-20--QINU`"'A note on notation=== As set out above, somewhat contemporaneous with and quite independent of Frege's invention of quantifiers for his axiomatic logic, C. S. Peirce invented quantifiers for Boole's algebraic logic or, more precisely, for an algebra of relations that extended Boole's logic. Frege disagreed with the assertion of Peirce that mathematics and logic are clearly distinct. To the contrary, as we have noted, Frege's view was that mathematics was reducible to logic or, more to the point, derivable from logic.'"`UNIQ--ref-000000A5-QINU`"''"`UNIQ--ref-000000A6-QINU`"' It is an irony that, though Frege invented his "logic of quantifiers" in order to support this view, his cumbersome 2-dimensional notation led to his invention being overlooked at the time. It was a linear notation somewhat similar to Peirce's that was adopted and that we use today. Here is an illuminating (and somewhat amusing) chronology of notational variants in the predicate calculus:'"`UNIQ--ref-000000A7-QINU`"' * In 1879, Frege developed his ''Begriffsschrift'' (concept writing), but for the next 30 years, his work was largely ignored. * In 1880, Peirce began to use the symbols $\prod$ and $\sum$, which he called quantifiers. * In 1885, Peirce added rules for quantifiers to Boolean algebra and published complete rules of inference for first-order logic. * In Germany (1890-95), Schröder adopted Peirce's notation, which became the standard for 20+ years. * In Italy, the logicians followed Peano, who had declared Frege's notation to be unreadable. * In England, Russell praised Frege's logic, but adopted the Peirce-Peano notation, which came to be called Peano-Russell notation. The following table summarizes the symbolism of the Boole/Peirce algebra of logic:'"`UNIQ--ref-000000A8-QINU`"' ::{| class="wikitable" |- ! Operation !! Symbol !! Explanation |- | Disjunction || style="text-align: center;" | $+$ || Logical sum |- | Conjunction || style="text-align: center;" | $\times$ || Logical product |- | Negation || style="text-align: center;" | $-$ || $−1=0$ and $−0=1$ |- | Implication || style="text-align: center;" | $\prec$ || Equal or less than |- | Existential Quantifier || style="text-align: center;" | $\sum$ || Iterated sum |- | Universal Quantifier || style="text-align: center;" | $\prod$ || Iterated product |} The top three lines of the table are Boole's. For his logical algebra, he used $1$ for truth and $0$ for falsehood, and he chose the symbols $+$, $\times$, and $−$ to represent ''disjunction'', ''conjunction'', and ''negation''. The bottom three lines of the table are Peirce's innovations:'"`UNIQ--ref-000000A9-QINU`"' * ''Implication'': Peirce observed that if $p \text{ implies } q$, then $q$ must always be true when $p$ is true, but $q$ might also be true for some reason independent of $p$. Therefore, the truth value of $p$ is ''always less than or equal to'' the truth value of $q$. 
Instead of using the symbol $≤$, which combines two operators, Peirce invented the claw symbol $\prec$ because it suggests a single, indivisible operation. * ''Existential quantifier'': In Boole's algebra, $1+1=1$. Therefore, Peirce adopted $\sum$ to indicate a logical summation of any number of terms, which would be true ''if at least one of the terms happened to be true''. * ''Universal quantifier'': In Boole's algebra, $1 \times 1 = 1$, so Peirce adopted $\prod$ to indicate a logical product of any number of terms, which would only be true ''if every one of the terms happened to be true''. It has been suggested that Peirce in algebraic logic and Frege in axiomatic logic did not so much invent the notion of quantifier as separate and free that notion from two tethers:'"`UNIQ--ref-000000AA-QINU`"' * from the notion of predicate in Aristotle's syllogism, on the one hand * from the connectives in Boole's algebra of logic, on the other After Frege and Peirce put the logic of predicates, variables and quantifiers into the language of logic, it became possible to apply this language to questions in the foundations of arithmetic, in particular, and of mathematics, generally.'"`UNIQ--ref-000000AB-QINU`"' =='"`UNIQ--h-21--QINU`"'Axiomatic development of arithmetic== The modern theory of arithmetic was developed in the last decades of the nineteenth century. The people most closely associated with this development and the dates of their initial publications are as follows:'"`UNIQ--ref-000000AC-QINU`"' * Gottlob Frege (1884) * Richard Dedekind (1888) * Giuseppe Peano (1889) Though their published works have much in common, we judge from their statements that each completed his own work before becoming aware of the work of the others. ==='"`UNIQ--h-22--QINU`"'Grassmann's and Peirce's contributions=== A great deal of the work involved in axiomatizing arithmetic was done in the decades before Frege, Dedekind, and Peano published. As much as 90% of that work is said to have been done by one person, Hermann Grassmann. Certainly Peano knew of and acknowledged his use of Grassmann's work, which was published in 1861 and included the following results:'"`UNIQ--ref-000000AD-QINU`"''"`UNIQ--ref-000000AE-QINU`"' * recursive definitions of addition and multiplication from a single one-argument operation, i.e. the successor operation $x+1$: ::$x+0=x$; $x+(y+1)=(x+y)+1$; ::$x \times 0=0$; $x \times (y+1)=(x \times y)+x$. * a definition of the induction principle, stated as follows in modern terminology: ::Let variables $x, y, …$ range over natural numbers, and let :::$0$ denote the number "zero" :::$Sx$ denote the operation $x+1$ :::$F$ range over sets of natural numbers. ::$(0 \in F \wedge \forall x(x \in F \Rightarrow Sx \in F)) \Rightarrow \forall x(x \in F)$ * a demonstration that the commutative law can be derived from the associative law by means of this induction principle. In 1881, Peirce published a set of axioms of number theory. His purpose was to use his quantified logic of relations to construct the system of natural numbers based on definitions and axioms. In his own words, he published his axioms for natural numbers to establish that "elementary propositions concerning number ...
are strictly syllogistic consequences from a few primary propositions."'"`UNIQ--ref-000000AF-QINU`"' Starting from his definition of ''finite set'', Peirce's axioms are (in modern terminology) as follows:'"`UNIQ--ref-000000B0-QINU`"' ::Given the following: ::* a set $N$ ::* $R$, a relation on $N$ ::* $1$, an element of $N$ ::* definitions of minimum, maximum, and predecessor with respect to $R$ and $N$ ::Peirce's axioms: # $N$ is partially ordered by $R$. # $N$ is connected by $R$. # $N$ is closed with respect to predecessors. # $1$ is the minimum element of $N$; $N$ has no maximum. # Mathematical induction holds for $N$. Peirce's axioms for the natural numbers start from finite sets, but they are nonetheless equivalent both to the defining conditions stated by Dedekind and to the axioms developed by Peano.'"`UNIQ--ref-000000B1-QINU`"' ==='"`UNIQ--h-23--QINU`"'Frege's theory of arithmetic=== Frege, by virtue of his work creating predicate logic, is one of the founders of modern (mathematical) logic. His view was that mathematics is reducible to logic. His major works published with the goal of doing this are these:'"`UNIQ--ref-000000B2-QINU`"' * in 1879 -- ''Begriffsschrift'', defining his "axiomatic-deductive" predicate calculus for the ultimate purpose of proving the basic truths of arithmetic "by means of pure logic." * in 1884 -- ''Die Grundlagen der Arithmetik'', using his predicate calculus to present an axiomatic theory of arithmetic. * in 1893/1903 -- ''Die Grundgesetze der Arithmetik'', presenting formal proofs of number theory from an intuitive collection of axioms. As we have seen, Boole developed his algebra of logic as a means by which ''deduction becomes calculation''. Frege's predicate calculus in the ''Begriffsschrift'' stood Boole's purpose on its head:'"`UNIQ--ref-000000B3-QINU`"' * Frege's goal: "to establish ... that arithmetic could be reduced to logic" and, thus, to create a logic by means of which ''calculation becomes deduction'' * Frege's program: to develop arithmetic as an axiomatic system such that all the axioms were truths of logic Driven by "an over-ruling passion to demonstrate his position conclusively" and not "content with the usual informal mathematical standard of rigour," Frege gave his exposition in ''Grundgesetze'' a great degree of precision and explicitness.'"`UNIQ--ref-000000B4-QINU`"' He himself tells us why this is so:'"`UNIQ--ref-000000B5-QINU`"' ::[T]he fundamental propositions of arithmetic should be proved…with the utmost rigour; for only if every gap in the chain of deductions is eliminated with the greatest care can we say with certainty upon what primitive truths the proof depends. Frege gave the following reason for developing his logic as an axiomatic system:'"`UNIQ--ref-000000B6-QINU`"' ::Because we cannot enumerate all of the boundless number of laws that can be established, we can obtain ''completeness'' only by a search for those [laws] which, potentially, imply all the others. Frege also commented on the role of proof in mathematics:'"`UNIQ--ref-000000B7-QINU`"' ::The aim of proof is not merely to place the truth of a proposition beyond all doubt, but also to afford us insight into the ''dependence'' of truths upon one another. Frege identified as the ''kernel'' of his system the axioms (laws) of his logic that potentially imply all the other laws. His statements above imply that he thought his system to be complete and his axioms to be independent.
He did not, however, provide precise definitions of completeness and independence nor did he attempt a proof that his system was complete and his axioms independent. Early in his first book on the foundations of arithmetic, Frege established his purpose:'"`UNIQ--ref-000000B8-QINU`"' ::[I]t is above all $Number$ which has to be ''either defined or recognized as indefinable''. This is the point which the present work is meant to settle. Frege began the introduction of numbers into his logic by defining what is meant by saying that two $Numbers$ are equal:'"`UNIQ--ref-000000B9-QINU`"''"`UNIQ--ref-000000BA-QINU`"' ::two concepts $F$ and $G$ are equal if the things that fall under them can be put into one-one correspondence From this he arrives at the notion that "a $Number$ is a set of concepts": * §72 the $Number$ that belongs to the concept $F$ is the extension of the concept "equal to the concept $F$" :and then continues as follows by defining the expression ::::$n$ is a $Number$ :to mean ::::there exists a concept such that $n$ is the $Number$ that belongs to it. * §73 he draws this inference ::::the concept $F$ is equal to the concept $G$ :implies ::::the $Number$ belonging to the concept $F$ is identical to the $Number$ belonging to the concept $G$ The natural numbers can be used as ''ordinals'' and as ''cardinals''. * ordinal numbers are used to count elements and place them in a succession: they correspond to expressions such as ''first'', ''second'', ''third''... and so forth. * cardinal numbers are used to count how many elements of some kind there are: ''one'' cat, ''two'' dogs, ''three'' horses, and so forth. As developed above, Frege interpreted statements about natural numbers to be statements about concepts. This interpretation stemmed from his understanding that natural numbers themselves were essentially ''cardinals'', "contrary to the general tendency in the late nineteenth century on foundations of arithmetic."'"`UNIQ--ref-000000BB-QINU`"' Hence, Frege's definition of numbers was based on their uses as cardinals.'"`UNIQ--ref-000000BC-QINU`"' Frege continued as follows: * §74 he defines the $Number$ $0$ as ::::the $Number$ that belongs to the concept "not identical with itself" * §75 he immediately clarifies this, stating ::::Every concept under which no object falls is equal to every other concept under which no object falls, and to them alone. :and therefore ::::$0$ is the $Number$ which belongs to any such concept, and no object falls under any concept if the number which belongs to that concept is $0$. 
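Frege's cardinal reading of $Number$ in §§72–75 can be made concrete with a small computational sketch. The following Python fragment is only an illustration over an assumed toy finite universe (the names <code>UNIVERSE</code>, <code>extension</code>, and <code>equinumerous</code> are ours, not Frege's): concepts are predicates, two concepts are "equal" when their extensions admit a one-one correspondence, and $0$ is what all concepts with empty extension share.
<syntaxhighlight lang="python">
# Toy, finite-universe illustration (not Frege's actual construction) of sections 72-75:
# concepts are predicates, two concepts are "equal" when their extensions can be put
# into one-one correspondence, and 0 is the Number belonging to "not identical with itself".
UNIVERSE = ["a", "b", "c"]                     # assumed toy domain of objects

def extension(concept):
    """The objects of the universe that fall under a concept (a Boolean predicate)."""
    return [x for x in UNIVERSE if concept(x)]

def equinumerous(F, G):
    """F and G are 'equal' in Frege's sense when a one-one correspondence exists between
    their extensions; for finite extensions we take the shortcut of comparing sizes,
    which Frege, in the business of defining number, of course could not do."""
    return len(extension(F)) == len(extension(G))

not_self_identical = lambda x: x != x          # Frege's defining concept for 0
no_unicorns        = lambda x: False           # another concept with empty extension
is_early_letter    = lambda x: x in ("a", "b")

print(extension(not_self_identical))                      # [] -- nothing falls under it
print(equinumerous(no_unicorns, not_self_identical))      # True: both belong to the Number 0
print(equinumerous(is_early_letter, not_self_identical))  # False
</syntaxhighlight>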
From this point (§76) onwards, Frege discussed the succession of natural numbers, starting from the number zero so defined, and (eventually) proved the five Peano axioms of arithmetic.'"`UNIQ--ref-000000BD-QINU`"' * §76 he defines the $Successor$ relation ::::$n$ follows in the series of $Numbers$ directly after $m$ :to mean ::::there exists a concept $F$ and an object falling under it, $x$, such that ::::::the $Number$ belonging to the concept $F$ is $n$ ::::and ::::::the $Number$ belonging to the concept "falling under $F$ but not equal to $x$" is $m$ * §77 he defines the $Number$ $1$ as ::::"the $Number$ belonging to the concept 'identical with $0$'" :from which it follows that ::::$1$ is the $Number$ that follows directly after $0$ * §78-81 he proves or gives a proof sketch for several propositions regarding the $Successor$ relation, using definitions of $series$ and $following$ $in$ $a$ $series$ from his earlier work of 1879 ::* the $Successor$ relation is 1-1 ::* every $Number$ except $0$ is a $Successor$ ::* every $Number$ has a $Successor$ * §82 he outlines a proof that there is no last member in the series of $Numbers$ * §83 he provides a definition of finite $Number$, noting that no finite $Number$ follows in the series of natural numbers after itself * §84 he notes that the $Number$ which belongs to the concept 'finite $Number$' is an infinite $Number$. Central to all of this work was a distinction that Frege was developing, but only finally published in 1892 and incorporated in the ''Grundgesetze'', namely, that every concept, mathematical or otherwise, had two important, entirely distinct aspects:'"`UNIQ--ref-000000BE-QINU`"''"`UNIQ--ref-000000BF-QINU`"' # ''Sinn'': a "meaning" or "sense" or "connotation" # ''Bedeutung'': an "extension" or "reference" or "denotation" This distinction of Frege's is the basis of what Gödel (many years later) characterized as the ''dichotomic conception'':'"`UNIQ--ref-000000C0-QINU`"' ::Any well-defined concept (property or predicate) $P(x)$ establishes a dichotomy of all things into those that are $P$s and those that are non-$P$s. :In other words, ::a concept partitions $V$ (the universe of discourse) into two classes: the class $\{ x : P(x) \}$ and its absolute complement, the class $\{ x : \neg P(x) \}$. Underlying this notion are two key assumptions: # the existence of a ''Universal Set'', $V$ -- what we have seen as Dedekind's ''Gedankenwelt'' # the unrestricted principle of ''Comprehension'' -- ''any'' well-defined property determines a set. For "naïve" set theory, these two assumptions are equivalent and either one of them suffices to derive the other: * to derive ''Universal Set'' from ''Comprehension'': ::::replace $P(x)$ by a truism, such as the property $x = x$. * to derive ''Comprehension'' from the ''Universal Set'': ::assume an all-encompassing set $V$, ::::note that any part of $V$ is also a set, ::::and that any well-defined concept $P(x)$ defines a subset of $V$, ::therefore the set $\{ x : P(x) \}$ exists! To these two assumptions, add Dedekind's principle of ''Extensionality''. Frege intended the ''Grundgesetze'' to be the implementation of his program to demonstrate "every proposition of arithmetic" to be "a [derivative] law of logic." In this work of 19 years' duration, there was no explicit appeal to an ''unrestricted'' principle of Comprehension.
Instead, Frege's theory of arithmetic appealed to Comprehension by virtue of its symbolism, according to which for any predicate $P(x)$ (concept or property) one can form an expression $S = \{ x : P(x)\}$ defining a set. Frege's theory assumes that (somehow) there is a mapping which associates an object (a set of objects) to every concept, but he does not present comprehension as an explicit assumption. All of this is in contrast to the use of ''restricted'' predicates in Cantor's early theory of sets.'"`UNIQ--ref-000000C1-QINU`"''"`UNIQ--ref-000000C2-QINU`"' ==='"`UNIQ--h-24--QINU`"'Dedekind's theory of numbers=== Dedekind's work through 1872 was concerned with the rigourization and arithmetization of analysis. More specifically, he focused on providing a rigorous definition of real numbers and of the real-number continuum upon which to establish mathematical analysis.'"`UNIQ--ref-000000C3-QINU`"' His subsequent work in foundations was based on the further thought that the concepts and the rules of arithmetic also needed rigourization, which he sought to provide using logic and set theory.'"`UNIQ--ref-000000C4-QINU`"' A goal for Dedekind (as for Frege, though in a somewhat different sense) was to "reduce" the natural numbers and arithmetic to logic and set theory. A second goal (again for both of them) was to examine set-theoretic procedures and establish to what extent they themselves were founded in logic. "But then, what are the basic notions of logic?"'"`UNIQ--ref-000000C5-QINU`"' The ultimate basis of a mathematician's knowledge was, according to Dedekind, the clarification of the concept of natural numbers in a non-mathematical fashion, which involves this two-fold task:'"`UNIQ--ref-000000C6-QINU`"' # to define numerical concepts (natural numbers) through logical ones # to characterize mathematical induction (the passage from $n$ to $n+1$) as a logical inference. The "clarification" that Dedekind provided was in answer to questions that he himself posed:'"`UNIQ--ref-000000C7-QINU`"' * What are the mutually independent fundamental properties of the sequence $N$, that is, those properties that are not derivable from one another but from which all others follow? * How should we divest these properties of their specifically arithmetic character so that they are subsumed under more general notions and under activities of the understanding ''without'' which no thinking is possible at all, but ''with'' which a foundation is provided for the reliability and completeness of proofs and for the construction of consistent notions and definitions? Dedekind developed what has been called a "set-theoretic" presentation of the natural numbers, which (in a modern formulation) is captured in the following four ''conditions'':'"`UNIQ--ref-000000C8-QINU`"' ::A ''simply infinite'' set $N$ has a distinguished element $e$ and an ordering mapping $ϕ$ such that ::# $ϕ(N) ⊆ N$ ::# $e \notin ϕ(N)$ ::# $N = ϕ_0(e)$, i.e. $N$ is the $ϕ$-chain of the unitary set $\{e\}$ ::# $ϕ$ is an injective mapping from $N$ to $N$, i.e. if $ϕ(a) = ϕ(b)$ then $a = b$. (A small computational illustration of these four conditions follows below.) The contrast between Dedekind's conditions and Peano's axioms has been described as follows:'"`UNIQ--ref-000000C9-QINU`"' * Conditions 2 and 4 are easily related to Peano's axioms, which tend to impose conditions on the behaviour of … the natural numbers, and the operations on them. * Conditions 1 and 3, however, are "set-theoretic" in character, "establishing structural conditions on subsets of the (structured) sets [and/or] on the behaviour of relevant maps."
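The four 'Dedekindian' conditions above can be illustrated with a minimal sketch, assuming a deliberately finite stand-in. In the following Python fragment the names $N$, $e$, $ϕ$ and the cut-off at 10 are our own choices, not Dedekind's: the candidate system is an initial fragment of the natural numbers with $ϕ(n) = n+1$, and, as the comments note, condition 1 can only fail at the artificial edge of the fragment, which is exactly why Dedekind requires an infinite system.
<syntaxhighlight lang="python">
# A finite stand-in for a "simply infinite" system: N = {0,...,9}, e = 0, phi(n) = n + 1.
N   = set(range(10))
e   = 0
phi = lambda n: n + 1

image = {phi(n) for n in N}

# 1. phi(N) is a subset of N -- fails only at the edge of the finite fragment
#    (phi(9) = 10 is missing), which is why a genuinely simply infinite set cannot be finite.
print(image <= N)                              # False for the fragment

# 2. e is not in phi(N).
print(e not in image)                          # True

# 3. N is the phi-chain of {e}: every element is reached from e by iterating phi.
def chain(start, steps):
    x, out = start, []
    for _ in range(steps):
        out.append(x)
        x = phi(x)
    return out
print(set(chain(e, 10)) == N)                  # True

# 4. phi is injective on N.
print(len({phi(n) for n in N}) == len(N))      # True
</syntaxhighlight>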
Dedekind's intention here was not to axiomatize arithmetic, but to give an "algebraic" characterization of natural numbers as a mathematical structure.'"`UNIQ--ref-000000CA-QINU`"' Dedekind then introduced the natural numbers as follows:'"`UNIQ--ref-000000CB-QINU`"' # he proved that every infinite set contains a ''simply infinite'' subset # he showed (in contemporary terminology) that any two simply infinite systems ... are isomorphic (so that the axiom system is categorical) It is interesting to note how Dedekind's approach contrasts with Peirce's and Frege's. With Dedekind we have a theory of numbers, while with Peirce we have an axiomatization of number theory. Further, in defining natural numbers, Dedekind started from infinite sets rather than finite sets, and was explicitly and specifically concerned with the real number continuum, that is, with infinite sets.'"`UNIQ--ref-000000CC-QINU`"' With Frege, too, we have an axiomatization of number theory. Further, Frege considered the natural numbers 1, 2, 3, … to be essentially cardinals, which are used to count how many objects of some kind there are, while Dedekind considered natural numbers to be essentially ordinals. Any set meeting Dedekind's four conditions for a simple infinity will consist of a first element $1$, a second element $f(1)$, a third $f(f(1))$, then $f(f(f(1)))$, and so on.'"`UNIQ--ref-000000CD-QINU`"' Thus, in his definition of a simple infinity, Dedekind captured the ordinality of the natural numbers.'"`UNIQ--ref-000000CE-QINU`"' He subsequently provided an explanation of how to derive the cardinality of the natural numbers, using initial segments of the number series as tallies:'"`UNIQ--ref-000000CF-QINU`"' ::for any set we can ask which such segment, if any, can be mapped one-to-one onto it, thus measuring its cardinality. (A set turns out to be finite, in the sense defined above, if and only if there exists such an initial segment of the natural numbers series.) ==='"`UNIQ--h-25--QINU`"'Peano's axioms of arithmetic=== From 1891 until 1906, Peano and his colleagues published substantial amounts of formalized mathematics in his journal ''Rivista di Matematica''. Their objective was not to reduce mathematics to a logical foundation, but to rewrite mathematics "in a formal framework," that is, to use a formal notation as an aid to precision. Peano's substantial interest in stating mathematical results and arguments precisely rose out of his teaching experience. '"`UNIQ--ref-000000D0-QINU`"' Peano knew of and studied the work of both Grassmann and Frege "before he began his ''Arithmetices principia''. Along with them, he believed that ordinary language -- and therefore any mathematics that was explained in it -- was too ambiguous. Peano's goals were these:'"`UNIQ--ref-000000D1-QINU`"' # to set up a solid system of arithmetic # to improve logic symbols and notation # to establish axioms that would serve as the basis for all arithmetic Thus, ::"Peano Arithmetic does not tell us what the numbers are… Rather, [it] provides a means of deducing the various arithmetic facts about them [, telling] us how the numbers are inter-related."'"`UNIQ--ref-000000D2-QINU`"' Like Dedekind, Peano accepted "class" as a logical notion, but, unlike Frege, Peano did not think that number could be defined in terms of logical notions.'"`UNIQ--ref-000000D3-QINU`"' In 1889, Peano published his first system of axioms for the natural numbers, in which he defined "every sign ... except ... 
four":'"`UNIQ--ref-000000D4-QINU`"''"`UNIQ--ref-000000D5-QINU`"' # ''number'' (positive integer) # ''unity'' # ''successor'' # ''equality'' (of numbers) Peano wrote his axioms, definitions, and proofs using the symbols that he defined in the preface to booklet. He intended the structure so created to be sufficient to derive "every result in arithmetic."'"`UNIQ--ref-000000D6-QINU`"' In 1891, Peano published a 2nd, simplified system of axioms, using only three undefined terms: $\mathbb{N}$ ($number$), $1$ ($One$), and $a^+$ (the $successor$ of $a$, where $a$ is a $number$).'"`UNIQ--ref-000000D7-QINU`"' These may be stated informally as follows: # $One$ is a $number$ # The sign $^+$ placed after a $number$ $a$ produces a $number$ $a^+$ # If $a$ and $b$ are two $numbers$, and if their $successors$ are equal, then they are also equal # $One$ is not the $successor$ of any $number$ # If $s$ is a class containing $One$, and if the class made up of the $successors$ of $s$ is contained in $s$, then every $number$ is contained in the class $s$. In 1898, Peano published a third system of axioms in which the undefined term $0$ ($Zero$) replaced the term $1$ ($One$). It is this system of axioms that is commonly known today as [[Peano axioms]] and that can be stated informally as follows:'"`UNIQ--ref-000000D8-QINU`"' # $Zero$ is a $number$ # The $successor$ of any $number$ is another $number$ # There are no two $numbers$ with the same $successor$ # $Zero$ is not the $successor$ of a $number$ # Every property of $Zero$, which belongs to the $successor$ of every $number$ with this property, belongs to all $numbers$ Note particularly the language of induction axiom 5, in which talk of "classes" is replaced by talk of "properties". This change stems from Frege's notion (discussed above) about the relationship between the ''extension'' and the ''meaning'' of a concept. ::The extension of a mathematical predicate (property, concept) $P(x)$ is just $\{x:P(x)\}$, the collection of everything for which the predicate is true. In a remark immediately following his 1898 statement of the axioms, Peano noted this:'"`UNIQ--ref-000000D9-QINU`"' ::These primitive propositions . . . suffice to deduce all the properties of the numbers that we shall meet in the sequel. There is, however, an infinity of systems which satisfy the five primitive propositions. . . . All systems which satisfy the five primitive propositions are in one-to-one correspondence with the natural numbers. The natural numbers are what one obtains by abstraction from all these systems; in other words, the natural numbers are the system which has all the properties and only those properties listed in the five primitive propositions. ([14], p. 218). Finally, in 1901, Peano added another axiom, which he numbered axiom 0, to the five axioms noted above.'"`UNIQ--ref-000000DA-QINU`"' :0. The (natural) $numbers$ form a class Various and diverse modern formalizations of Peano's axioms exist. Typical differences among them involve the following:'"`UNIQ--ref-000000DB-QINU`"' * alternate orders of stating the axioms, especially with the axiom of induction stated last * alternate formulations of the axiom of induction itself, for example: ::::* $\forall S \in Sets : (0 \in S \wedge \forall n : n \in S \to n' \in S) \to \mathbb{N} \subset S$ ::::* $\forall P \in Predicates : (P(0) \wedge \forall n : P(n) \to P(n')) \to \forall n : P(n)$ All of Peano's formulations of the induction axiom (and those above) are second-order statements. 
Peano did not discuss the consistency of his axioms. He did, however, examine the independence of his axioms, that is, whether or not all of his axioms are actually needed, by defining, for each axiom, a set for which the axiom being considered was false, but for which the other axioms remained true, as follows:'"`UNIQ--ref-000000DC-QINU`"''"`UNIQ--ref-000000DD-QINU`"' An alternate way of demonstrating the independence of axioms is as follows:'"`UNIQ--ref-000000DE-QINU`"' ::To prove that every axiom is needed to define the natural numbers, we need to remove each one from the set of axioms and demonstrate that the remaining axiom set has models that are not isomorphic to the natural numbers. ==='"`UNIQ--h-26--QINU`"'Another note on notation=== Peano and his colleagues were seeking to "rewrite" mathematics in a logical framework and needed logical symbols that could be freely mixed with mathematical symbols in formulas. Peano therefore replaced Peirce's logical symbols with new ones, occasionally turning letters upside down and/or backwards to form them. The following table lists Peirce's symbols and Peano's replacements:'"`UNIQ--ref-000000DF-QINU`"' ::{| class="wikitable" |- ! Operation !! Peirce !! Explanation !! Peano !! Explanation |- | Disjunction || style="text-align: center;" | $+$ || Logical sum || style="text-align: center;" | $\lor$ || v for ''vel'' |- | Conjunction || style="text-align: center;" | $\times$ || Logical product || style="text-align: center;" | $\land$ || Upside down v |- | Negation || style="text-align: center;" | $-$ || $−1=0$ and $−0=1$ || style="text-align: center;" | $\sim$ || Curly minus sign |- | Implication || style="text-align: center;" | $\prec$ || Equal or less than || style="text-align: center;" | $\supset$ || C for ''consequentia'' |- | Existential Quantifier || style="text-align: center;" | $\sum$ || Iterated sum || style="text-align: center;" | $\exists$ || E for ''existere'' |- | Universal Quantifier || style="text-align: center;" | $\prod$ || Iterated product || style="text-align: center;" | $( )$ || O for ''omnis'' |} ==='"`UNIQ--h-27--QINU`"'Whose axioms are they -- anyway?=== A terminological dispute has arisen over the use of "Peano axioms" to designate the axioms of arithmetic. This designation has been called into question by some since, in developing his axioms, Peano himself acknowledged the following:'"`UNIQ--ref-000000E0-QINU`"' * he made extensive use of Grassmann's work * he "borrowed" the axioms themselves from Dedekind Indeed, as Peano himself stated it:'"`UNIQ--ref-000000E1-QINU`"' ::... I used the book of H. Grassmann … and the recent work by R. Dedekind…. Certainly, as we have seen above, in 1888, one year before Peano's work, Dedekind did publish a similar system of axioms and obtained similar results. As a result, some have argued that Dedekind deserves at least as much (if not more) credit than Peano for the postulates on the natural numbers, referring to them as Dedekind-Peano axioms:'"`UNIQ--ref-000000E2-QINU`"' ::What it means to be "simply infinite" is captured in four 'Dedekindian' conditions, which are "a notational variant of Peano's axioms for the natural numbers" and, hence, "are thus properly called the Dedekind-Peano axioms." 
As to whether Peano knew of and, therefore, may have borrowed something from Dedekind's work, there are conflicting claims: * one source claims that Peano was completely unaware of Dedekind's book ''until after his own was published''.'"`UNIQ--ref-000000E3-QINU`"' * a more nuanced claim is that Peano read Dedekind's essay only ''as his own book was going to press'' and stated in his own preface that he had found Dedekind's essay "useful"… meaning (perhaps) that he found confirmation of "the independence of the primitive propositions from which I started."'"`UNIQ--ref-000000E4-QINU`"' The argument has been made that the "much more clear and more thorough" nature of Peano's work is reason enough for remembering him as the "creator" of the axioms, with the following suggested as deficiencies in Dedekind's work:'"`UNIQ--ref-000000E5-QINU`"' ::* Dedekind only writes three axioms, equivalent to Peano's 1889 axioms 1, 7, and 9 ::* he omits any discussion of the equality relation ::* he does not explicitly define the successor function in an axiom In the same regard, Dedekind's notation for the successor of a number $a$ as $a'$ is less preferred than Peano's notation of $a+1$, which makes the definition more obvious.'"`UNIQ--ref-000000E6-QINU`"' Finally, support for nominating the axioms as Peano's rather than Dedekind's has been based on the differences in their purposes: * "Peano's primary interest was in axiomatics": he neither developed nor used his logic for the purpose of "reducing" mathematical concepts to logical concepts and, indeed, "he denied the validity of such a reduction."'"`UNIQ--ref-000000E7-QINU`"' * It is correct to call the axioms Peano's rather than Dedekind's, because Peano was not trying to define the primitive notions of arithmetic, but rather to characterize them axiomatically; whereas Dedekind was not trying to axiomatize arithmetic, but rather to define arithmetical notions in terms of logical ones.'"`UNIQ--ref-000000E8-QINU`"' This last point may be the most telling, suggesting as it does that Dedekind himself would not have wanted his 'Dedekindian' conditions to be called axioms. =='"`UNIQ--h-28--QINU`"'Cantor's general theory of sets== It was a widespread belief in the late 19th century that pure mathematics was nothing but an elaborate form of arithmetic and that the "arithmetization" of mathematics had brought about higher standards of rigor. This belief led to the idea of grounding all of pure mathematics in logic and set theory. The implementation of this idea proceeded in two steps:'"`UNIQ--ref-000000E9-QINU`"' # the establishment of a theory of real numbers (arithmetization of analysis) # the definition of the natural numbers and the axiomatization of arithmetic Infinite sets had been needed for an adequate definition of important mathematical notions, such as limit and irrational numbers. It was this initial use that led Cantor himself to begin studying infinite sets "in their own right."'"`UNIQ--ref-000000EA-QINU`"' In 1883, Cantor began his general theory of sets with the publication of ''Foundations of a General Theory of Manifolds'' (the ''Grundlagen'').
Among other things, in this paper he made the interesting and somewhat self-justifying claim of the autonomy of pure mathematics: * pure mathematics may be concerned with systems of objects which have no known relation to empirical phenomena at all'"`UNIQ--ref-000000EB-QINU`"' * any concepts may be introduced subject only to the condition that they are free of contradiction and defined in terms of previously accepted concepts'"`UNIQ--ref-000000EC-QINU`"' In the ''Grundlagen'', there was a significant change in Cantor's conception of a set, which he defined as follows:'"`UNIQ--ref-000000ED-QINU`"' ::any multiplicity which can be thought of as one, i.e. any aggregate (''inbegriff'') of determinate elements ''that can be united into a whole'' by some law. The notable changes in this from his earlier explanation of the concept of set are these:'"`UNIQ--ref-000000EE-QINU`"' * the absence of any reference to a prior conceptual sphere or domain from which the elements of the set are drawn * the modification according to which the property or "law" which determines elementhood in the set "unites them into a whole" Cantor's overall reason for introducing a new conception of ''set'' was to support the development of his theory of transfinite numbers.'"`UNIQ--ref-000000EF-QINU`"' ==='"`UNIQ--h-29--QINU`"'The theory of transfinite numbers=== Cantor's intention was "to generalize in a rigorous way the very notion of number in itself ... by building transfinite and finite numbers, using the same principles."'"`UNIQ--ref-000000F0-QINU`"' In order to do this, Cantor employed the notion of set in an entirely new way:'"`UNIQ--ref-000000F1-QINU`"' * Cantor's previous notion of a set involved specifying a set of objects from some given domain, ''albeit'' one which was already well-defined. * Cantor's new notion of a set introduced the transfinite numbers in terms of the notion of a ''set'' of objects ''of that very same domain''. Briefly, Cantor defined (ordinal) numbers to be what can be obtained, by starting with the initial number ($0$) and applying two operations, which he called ''principles of generation'':'"`UNIQ--ref-000000F2-QINU`"' ::# the usual process of taking successors, which yields, for every given number $a$, its successor $a + 1$ ::# a new process of taking limits of increasing "sequences", which yields, after any given "sequence" of numbers without a last element, a number $b$ In this, Cantor seems not only to be introducing a new understanding for the term ''set'', but (almost) also to be proposing a new (and clearly perverse) understanding for the term ''sequence'', which is defined only for countable sets -- a fact that was at the heart of his demonstration that the set of reals $\mathbb{R}$ cannot be written as a sequence and, hence, is not countable. It has been noted that Cantor's mention of "sequences" of numbers rather than "sets" is, while inaccurate, of no consequence, "since the sequences in question are in their natural order and so are determined by the set of their members."'"`UNIQ--ref-000000F3-QINU`"' Cantor defined ''numbers'' (both finite and transfinite numbers) in terms of the notion of a ''class of numbers'', as follows:'"`UNIQ--ref-000000F4-QINU`"''"`UNIQ--ref-000000F5-QINU`"' ::Let $Ω$ denote the class of all (ordinal) numbers :::and $X$ range over sets :::and $S(X)$ be the least number greater than every number in $X$, given by one of the two "generating principles" noted above. :::and $X \text{ is a subset of } Ω ⇒ S(X) ∈ Ω$. 
::Then we have ::* $0$ the least number is $∅$ (the null set) ::* $1$ is $S(0)$ or $\{0\}$ ::* $2$ is $S(1)$ or $\{0, 1\}$ ::* $n$ is $S(n - 1)$ or $\{0, 1, 2, … n-1\}$ :::$\vdots$ ::* $\omega$ is limit of $\{0, 1, 2, … \}$ (the set of all finite ordinals) ::* $\omega + 1$ is $S(\omega)$ :::$\vdots$ ::* $\omega_1$ (the set of all countable ordinals) :::$\vdots$ ::* $\omega_2$ (the set of all countable and $ℵ_1$ ordinals) :::$\vdots$ ::* $\omega_{\omega}$ (the set of all finite ordinals and $ℵ_k$ ordinals for non-negative integers $k$) :::$\vdots$ In order of increasing size, the (ordinal) numbers are then ::$0, 1, 2, ..., \omega, \omega+1, \omega+2, ..., \omega+\omega = ω·2, …, ω·n, ω·n +1, …,ω^2, ω^2+1, …, ω^ω, … \text{ and so on and on }$ Finally, ::$Ω$ is well-ordered by $<$ (there are no infinite descending sequences of $a_n$). In 1892, Cantor proved this theorem:'"`UNIQ--ref-000000F6-QINU`"' ::given any set $S$, there exists another set, what we now call the power set $p(S)$, whose cardinality is greater than $S$ (Cantor's Theorem). Reasoning by analogy, Cantor also argued that there is an entire infinite and very precise hierarchy of transfinite (cardinal) numbers, as follows:'"`UNIQ--ref-000000F7-QINU`"' # for a finite set of $n$ elements, its power set, i.e. $p(n)$, has exactly $2^n$ elements # the set of natural numbers $\mathbb{N}$ has power (cardinality) $ℵ_0$ # the power set of $\mathbb{N}$, i.e. $p(\mathbb{N})$, has cardinality $2^{ℵ_0}$ # the power set of this new set, i.e. $p(p(\mathbb{N}))$, has cardinality $2^{2^{ℵ_0}}$ # and so on ... thus defining an infinite hierarchy of transfinite cardinals ordered as follows: $ℵ_0 < 2^{ℵ_0} < 2^{2^{ℵ_0}} < … $. Cantor classified the transfinite ordinals and related them to cardinals as follows: * the "first number class" consisted of the finite ordinals, the set $\mathbb{N}$ of natural numbers with cardinality $ℵ_0$, all of which have only a finite set of predecessors. * the "second number class" was formed by $ω$ and all numbers following it (including $ω^ω$, etc.) with cardinality $ℵ_1$, all of which have only a set of predecessors with cardinality $ℵ_0$. * the "third number class" consisted of transfinite ordinals with cardinality $ℵ_2$, all of which have only a set of predecessors with cardinality $ℵ_1$. * and so on and so on.... The transfinite ordinals thus formed the basis of a well-defined scale of increasing transfinite cardinalities:'"`UNIQ--ref-000000F8-QINU`"' Since 1878, Cantor had known that the reals $\mathbb{R}$ formed a non-denumerable set, i.e. a set with a power higher than the naturals $\mathbb{N}$. Now he proved this further result:'"`UNIQ--ref-000000F9-QINU`"' ::The number of elements in the set of real numbers, which he had previously termed $c$, is the same as the number of elements in the power set of the natural numbers. In other words, he proved the equation $c = 2^{ℵ_0}$ to be true, meaning that the number of points of the continuum provided by the real line had exactly $2^{ℵ_0}$ points. 
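Both ingredients of this reasoning can be checked directly in the finite case. The following Python fragment (a toy example of ours, not Cantor's own argument) verifies that a 3-element set has $2^3$ subsets and then exhibits the diagonal construction behind Cantor's Theorem: whatever map $f$ from $S$ into its power set one proposes, the set $\{x \in S : x \notin f(x)\}$ is missed by $f$.
<syntaxhighlight lang="python">
# Finite illustration of |p(S)| = 2**|S| and of the diagonal set behind Cantor's Theorem.
from itertools import chain, combinations

def power_set(S):
    S = list(S)
    return [set(c) for c in chain.from_iterable(combinations(S, r) for r in range(len(S) + 1))]

S = {0, 1, 2}
print(len(power_set(S)) == 2 ** len(S))        # True: 8 subsets

f = {0: {1, 2}, 1: set(), 2: {0, 1, 2}}        # any attempted map S -> p(S)
diagonal = {x for x in S if x not in f[x]}     # the "diagonal" subset of S
print(all(diagonal != f[x] for x in S))        # True: the diagonal set is missed by f
</syntaxhighlight>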
All of this permitted a more precise formulation of Cantor's Continuum Hypothesis as $c = ℵ_1$.'"`UNIQ--ref-000000FA-QINU`"' A partial summary of Cantor's achievements arising from his theory of transfinite numbers includes an "almost modern" exposition of the theory of [[Well-ordered set|well-ordered sets]] and also the theory of [[Cardinal number|cardinal numbers]] and [[Ordinal number|ordinal numbers]].'"`UNIQ--ref-000000FB-QINU`"' In the view of some (but certainly not all) mathematicians, Cantor's study of infinite sets and transfinite numbers introduced little that was alien to a "natural foundation" for mathematics, which "would, after all … need to talk about sets of real numbers" and "should be able to cope with one-to-one correspondences and well-orderings."'"`UNIQ--ref-000000FC-QINU`"' Here is a succinct and robust defence of what Hilbert subsequently called "Cantor's paradise":'"`UNIQ--ref-000000FD-QINU`"' ::There are many mathematicians who will accept the ... theory of functions as developed in the 19th century, but will, if not reject, at least put aside the theory of transfinite numbers, on the grounds that it is not needed for analysis. Of course, on such grounds, one might also ask what analysis is needed for; and if the answer is basic physics, one might then ask what that is needed for. When it comes down to putting food in one's mouth, the 'need' for any real mathematics becomes somewhat tenuous. Cantor started us on an intellectual journey. One can peel off at any point; but no one should make a virtue of doing so. ==='"`UNIQ--h-30--QINU`"'The paradoxes=== Cantor came to recognize that his new notion of set, which he introduced to support the development of the transfinite numbers, was problematic:'"`UNIQ--ref-000000FE-QINU`"' ::not every property of numbers "unites the objects possessing it into a whole" A significant consequence of Cantor's general theory of sets not only for the process of mathematical rigourization generally, but also for what Hilbert would later state as his 2nd problem particularly, was this:'"`UNIQ--ref-000000FF-QINU`"' ::precisely when mathematicians were celebrating that "full rigor" had been finally attained, serious problems emerged for the foundations of set theory. The "serious problems" that emerged were, of course, paradoxes. Neither the initial introduction of infinite sets by others nor their use in his early theory of sets by Cantor himself had been problematic. However, his subsequent introduction of transfinite numbers and development of transfinite arithmetic made him aware of the potential for paradoxes within set theory. Cantor is said to have attributed the source of these paradoxes to the following:'"`UNIQ--ref-00000100-QINU`"' * the use (by Frege) of an unrestricted principle of comprehension * the acceptance (by Dedekind) of arbitrary subsets of a Universal Set (''Gedankenwelt'') More specifically, the claim is that Cantor himself traced the paradoxes to a faulty understanding (by others) of what constitutes a legitimate mathematical collection. For Cantor, the mathematically relevant notion of a collection is said to have been based on the "combinatorial concept" of a set:'"`UNIQ--ref-00000101-QINU`"' ::In order to be treated as a whole, [a mathematical collection] must be capable of being counted, in a broad sense of "count" which means ''well-orderable''. 
In contrast to this was the "logical concept" of a set, developed by Frege, accepted by Dedekind, and championed by Russell, which "treats collections as the extensions of concepts":'"`UNIQ--ref-00000102-QINU`"' ::For a multiplicity to be treated as a mathematical whole, we must have some propositional function which acts as a rule for picking out all of the members. The point of this contrast rests on the claim that the set-theoretic paradoxes are a problem only for the logical concept of a set, which includes the inconsistent Comprehension Principle, and has Russell's Paradox as a result (see the sketch below).'"`UNIQ--ref-00000103-QINU`"' The general consensus, both then and now, however, is that Cantor's own construction of the system of transfinite numbers introduced foundational problems in the form of paradoxes into mathematics.'"`UNIQ--ref-00000104-QINU`"' It is known that he was aware of at least two such paradoxes:'"`UNIQ--ref-00000105-QINU`"' '''Burali-Forti Paradox''' (paradox of the ordinals) ::As Cantor defined them, each transfinite ordinal is the order type of the set of its predecessors: ::* $ω$ is the order type of $\{0, 1, 2, 3, …\}$ ::* $ω+2$ is the order type of $\{0, 1, 2, 3, …, ω, ω+1\}$ ::* and so on, so that to each initial segment of the series of ordinals, there corresponds an immediately greater ordinal. ::Now, the "whole series" of all transfinite ordinals would form a well-ordered set, and to it there would, therefore, correspond a new ordinal number, $o$, that would have to be greater than all members of the "whole series", and in particular $o < o$. '''Cantor's paradox''' (paradox of the alephs): ::As Cantor defined them, each aleph is the cardinality of a class of transfinite ordinals (as described above) ::If there existed a "set of all" cardinal numbers (alephs), applying Cantor's Theorem would yield a new aleph $ℵ$, such that $ℵ < ℵ$. There are disagreements concerning Cantor's notion of number and how that notion related to the concept of a well-ordered set. There are also disagreements concerning the paradoxes of which he was actually aware and when he became aware of them. There is, on the other hand, general agreement that Cantor understood the paradoxes to be "a fatal blow to the 'logical' approaches to sets favoured by Frege and Dedekind" and that, as a result, he attempted to put forth views that were opposed to the "naïve assumption that all well-defined collections, or systems, are also 'consistent systems'."'"`UNIQ--ref-00000106-QINU`"' The paradoxes convinced both Hilbert and Dedekind that there were important doubts concerning the foundations of set theory. Cantor apparently planned to discuss the paradoxes and the problem of well-ordering in a paper that he never actually published, but the contents of which he discussed in correspondence with Dedekind and Hilbert.'"`UNIQ--ref-00000107-QINU`"''"`UNIQ--ref-00000108-QINU`"' Cantor did not regard the paradoxes (of which he was aware) as a crisis in set theory, but rather as a spur for the overall delimitation of sets. He considered the class $Ω$ of ordinals $\omega_n$ and the class of cardinals $ℵ_n$ to be "inconsistent multiplicities".'"`UNIQ--ref-00000109-QINU`"' He argued that whatever is deemed a ''set'' "can be well-ordered using a procedure whereby a well-ordering is defined through successive (recursive) choices":'"`UNIQ--ref-0000010A-QINU`"' ::The set must get well-ordered, else all of $Ω$ would be injectible into it, so that the set would have been an inconsistent multiplicity instead.
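The inconsistency that the unrestricted Comprehension Principle brings with it can be displayed in miniature. The following Python sketch (a toy encoding of ours, not a formal derivation) forms the "set" $R = \{x : x \notin x\}$ from a bare predicate and then asks whether $R \in R$; no stable answer is possible, which is Russell's Paradox.
<syntaxhighlight lang="python">
# A toy rendering of the "logical concept" of a set: a set given purely by a predicate.
class ComprehensionSet:
    def __init__(self, predicate):
        self.predicate = predicate
    def __contains__(self, x):          # membership is just satisfaction of the predicate
        return self.predicate(x)

R = ComprehensionSet(lambda x: x not in x)     # R = {x : x is not a member of x}

try:
    print(R in R)                              # either answer contradicts the other
except RecursionError:
    print("no stable answer: R is a member of R exactly when it is not")
</syntaxhighlight>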
Thus, in the ''Grundlagen'' Cantor introduced a distinction between totalities as sets and totalities as what came to be called "proper classes":'"`UNIQ--ref-0000010B-QINU`"' ::Every well-defined set has a power, but there are totalities, such as the totality of all whole numbers or of all powers, which have no power. His intention was to restrict the term ''set'' to "determinate infinites" (represented by the number classes) as distinguished from "absolute infinities" (represented by the totality of transfinite numbers or the totality of the number classes or powers). The late 19th century initially saw the scope of logic expand immensely, but then saw it contract. The work of Dedekind, Frege, et al. appeared to link both propositional and predicate logic inextricably with set theory and the theory of relations. Subsequently, the discovery of the paradoxes led to changed understandings about logic, set theory, and mathematics:'"`UNIQ--ref-0000010C-QINU`"' * the theory of sets goes well beyond the logic of mathematics * the language of mathematics requires a strict formalization Paradoxes led to these changes without, ironically, leading to a single new theorem or metatheorem -- though they eventually led to new, very challenging axioms!'"`UNIQ--ref-0000010D-QINU`"' =='"`UNIQ--h-31--QINU`"'Axiomatic development of geometry== For two thousand years, Euclid's ''Elements'', with its approach of proving its theorems starting from definitions and axioms, was unique in mathematics. It was not only "the one and only geometry," but also "the structural paradigm for all other fields of mathematics." The development of non-Euclidean geometries both arose out of and gave rise to questions about the geometry of Euclid. Euclid's ''Elements'' was based on 5 axioms and 5 postulates and had a logical structure that enabled the development of proofs. However, it also had serious deficiencies, consisting of concealed assumptions, meaningless definitions, and logical inadequacies.'"`UNIQ--ref-0000010E-QINU`"''"`UNIQ--ref-0000010F-QINU`"' * Specifically and very importantly, Gauss had pointed out that the notion of ''betweenness'' was often used in Euclid, but was never defined. * More generally, the discovery of non-Euclidean geometries had, in itself, stimulated a general determination among mathematicians to bring out unstated assumptions and either justify them or avoid them. ==='"`UNIQ--h-32--QINU`"'Early efforts by Pasch=== Chief among those asking and answering questions about Euclid's ''Elements'' was Moritz Pasch, who laboured a half century in the foundations of geometry, "a field that didn't really exist before he took a hard look at Euclid's Elements and found a number of hidden assumptions in it that nobody had noticed before."'"`UNIQ--ref-00000110-QINU`"' Pasch observed that Euclid's "definitions" of some of his basic terms ($point$, $line$, and $plane$) were insufficient. To say, for example, that a point is "that which has no part" is not to say what a point is, since we then further need to say what a "part" is! Others before Pasch had realized that attempting to define every concept of a mathematical discipline would result in an infinite regress of definitions. It was he who raised this issue specifically for geometry by asking, What terms of geometry must be left undefined?
In answering this question for projective geometry, Pasch left these three primitive terms undefined, choosing the last two because, as he himself remarked, no one has actually had any experience of a line or a plane.:'"`UNIQ--ref-00000111-QINU`"' * $point$ * $line$ $segment$ -- rather than Euclid's $line$ * $planar$ $section$ -- rather than Euclid's $plane$ Agreeing with Gauss, Pasch addressed the deficiency in Euclid's geometry arising from the absence of axioms relating to the order of points on a line and in the plane, such as the following:'"`UNIQ--ref-00000112-QINU`"' * if a point $B$ is between a point $A$ and a point $C$, then $C$ is not between $A$ and $B$. * every line divides a plane into two parts. * if a line enters a triangle $ABC$ through the side $AB$ and does not pass through $C$, then it must leave the triangle either between $B$ and $C$ or between $C$ and $A$. Before Pasch, students could draw diagrams to illustrate these things, but geometers had no basis for dealing logically with the observations given by those diagrams. In 1882, Pasch published ''Lectures on Modern Geometry'', which has been called a "truly satisfactory and … serious instance of axiomatization of a branch of knowledge."'"`UNIQ--ref-00000113-QINU`"' Pasch's book was an axiomatic development of projective geometry embodying the following ideas about axioms:'"`UNIQ--ref-00000114-QINU`"' * axioms were assertions about terms and notions, which remained otherwise undefined * experience could suggest axioms, but could not be appealed to in proofs from axioms Pasch's axioms, then, served two purposes: they (implicitly) gave meaning to the undefined terms and notions of his geometry and they (alone) yielded its theorems. Pasch believed that a too great reliance on physical intuition was the root cause of the problems in geometry. He supported his belief referring to an application of the following principle of duality that had been known for more than a half century:'"`UNIQ--ref-00000115-QINU`"' ::Any true statement of projective ''plane'' geometry gives rise to another, equally true, dual statement obtained by substituting 'point' for 'line', 'collinear' for 'concurrent', 'meet' for 'join', and vice versa, wherever these words occur in the former. (For projective ''space'' geometry, duality holds for points and planes.) Pasch noted that our physical intuitions about points and lines contradict this duality principle and, as a consequence, though we ''know'' the principle to be true, we don't really ''believe'' that the terms 'points' and 'lines' are really interchangeable! Pasch believed that argument in mathematics should proceed not intuitively from physical interpretations of primitive terms, but logically from proofs based on axioms that related those primitive terms to one another. He saw that two different, but related tasks underlay the development of an axiomatic theory of geometry:'"`UNIQ--ref-00000116-QINU`"' * specifically, the identification of the hidden assumptions of Euclid's (and others') geometry * generally, the determination of what actually constitutes an axiom system Pasch played a role in accomplishing both of these tasks. Hilbert was greatly influenced by his work. ==='"`UNIQ--h-33--QINU`"'Hilbert's ''Grundlagen''=== Hilbert understood that the arithmetization of analysis and the axiomatization of arithmetic were notable achievements of nineteenth century mathematics. Through them, most of mathematics had been provided with a strict axiomatic foundation. 
What he objected to was any suggestion that the concepts of arithmetic alone were susceptible of a fully rigorous treatment. He felt that another, equally notable achievement of the nineteenth century was the flourishing of geometry and, in particular, the development of non-Euclidean geometries. What remained, then, was to establish a purely formal and deductive basis for geometry.'"`UNIQ--ref-00000117-QINU`"' Hilbert himself did the pioneering work toward giving geometry the purely formal character found in algebra and analysis. In 1893, he prepared a course on non-Euclidean geometry. He was familiar with Pasch's previous work in geometry and adopted his axiomatic approach in preparing the course. In 1899, the year before his Paris Problems Address, he published his theory of geometry, ''Grundlagen der Geometrie'', the importance of which can be summarized as follows: * it provided an axiomatic foundation that addressed the deficiencies of Euclid's ''Elements'' * it examined meta-mathematical notions associated with the axiomatization process itself Hilbert intended that ''Grundlagen'' serve multiple purposes: a specifically geometrical purpose, a larger mathematical purpose involving geometry and analysis, and an overall meta-mathematical purpose.'"`UNIQ--ref-00000118-QINU`"' ==='"`UNIQ--h-34--QINU`"'The specific geometric purpose=== The specific purpose of ''Grundlagen'' was to lay down a foundation, different from the evidence of intuition, by means of which all (known) theorems of Euclidean geometry might be rigourously deduced in a manner that was true to ''the spirit'', if not to ''the letter'' of Euclid's ''Elements''. As the "sufficiently general and comprehensive principle" necessary for his purpose, Hilbert chose the axiomatic method.'"`UNIQ--ref-00000119-QINU`"' Hilbert's geometry was based on the following:'"`UNIQ--ref-0000011A-QINU`"''"`UNIQ--ref-0000011B-QINU`"' * primitive elements: $point$, $line$, and $plane$ * primitive relations: ::of ''incidence'' between (i) a point and a straight line and (ii) a straight line and a plane ::of ''order'' between (iii) three points ::of ''congruence'' between (iv) two pairs of points ('segments') and (v) two equivalence classes of point triples ('angles'). * axioms, in groups: ''incidence'', ''order'', ''parallelism'', ''congruence'', and ''continuity''. In 1902, an authorized English translation of the ''Grundlagen'' was published. It is instructive to examine how Hilbert himself first presented his theory. In the first two paragraphs, Hilbert introduced (1) the primitive terms of his geometry and (2) the groups of axioms connecting these terms:'"`UNIQ--ref-0000011C-QINU`"' ::1. Let us consider three distinct systems of things. ::* The things composing the first system, we will call $points$ and designate them by the letters $A, B, C,. . .$ ::* those of the second, we will call straight $lines$ and designate them by the letters $a, b, c,. . .$ ::* those of the third system, we will call $planes$ and designate them by the Greek letters $α, β, γ,. . .$ ::The points are called the elements of linear geometry; the points and straight lines, the elements of plane geometry; and the points, lines, and planes, the elements of the geometry of space or the elements of space. ::2. We think of these points, straight lines, and planes as having certain mutual relations, which we indicate by means of such words as "are situated," "between," "parallel," "congruent," "continuous," etc.
::The complete and exact description of these relations follows as a consequence of the axioms of geometry. These axioms may be arranged in five groups. Each of these groups expresses, by itself, certain related fundamental facts of our intuition. We will name these groups as follows:
:::I, 1–7. Axioms of connection.
:::II, 1–5. Axioms of order.
:::III. Axiom of parallels (Euclid's axiom).
:::IV, 1–6. Axioms of congruence.
:::V. Axiom of continuity (Archimedes's axiom).
Following this, Hilbert introduced the axioms of his geometry, one group at a time, noting some alternative, equivalent language used to express them and noting some theorems derivable from them. Here are some interesting details about the first two groups of axioms:
::Group I: The axioms of this group establish a connection between the concepts indicated above; namely, points, straight lines, and planes….
:Instead of [saying, for example, that "two distinct points $A$ and $B$ always completely "determine" a straight line $a$,] we may also employ other forms of expression; for example, we may say $A$ "lies upon" $a$, $A$ "is a point of" $a$, $a$ "goes through" $A$ "and through" $B$, $a$ "joins" $A$ "and" or "with" $B$, etc. If $A$ lies upon $a$ and at the same time upon another straight line $b$, we make use also of the expression: "The straight lines" $a$ "and" $b$ "have the point $A$ in common," etc.
::Group II: The axioms of this group define the idea expressed by the word "between," and make possible, upon the basis of this idea, an order of sequence of the points upon a straight line, in a plane, and in space. The points of a straight line have a certain relation to one another which the word "between" serves to describe.
As Hilbert intended, the primitive relations connected the primitive elements, and the axioms of connection (incidence), order, and congruence defined (implicitly) the three primitive relations. The primitive elements and relations remained otherwise undefined. Taken together, the axioms expressed "certain related fundamental facts of our [spatial] intuition."

The sense in which axioms are ''implicit definitions'' is made apparent by considering that if the terms of a theory (the theory of geometry, for example) support multiple interpretations, then the sentences of that theory, and sets of those sentences, provide definitions ''of a certain kind'':
::A set $AX$ of sentences containing $n$ (geometric) terms defines an $n$-place relation $R_{AX}$ holding of just those $n$-tuples which, when taken respectively as the interpretations of $AX$'s (geometric) terms, render the members of $AX$ true.
Thus the $n$ terms of $AX$ serve as "place-holders" that are devoid of meaning in themselves, but that yield to multiple interpretations. The initial choice of a system of axioms was not, then, the end of an enquiry into the foundations of a theory. Rather, the enquiry would not be complete until the axioms, which define the concepts and relations of a theory, are such that ''no other characteristics of those concepts and relations can be added''. Enquiry into foundations is thus an evolution in the direction of an ever better understanding of the basic concepts and relations of a theory, for which the axioms provide definitions.
Hence, further experience working with a theory can lead to a widening of those definitions and, consequently, a widening of our understanding of those basic concepts and of the entire theory itself.

Thus, in developing his theory of geometry, Hilbert discarded the "intuitive-empirical" level of the older geometrical views by making all his assumptions explicit and by giving his undefined terms no properties beyond those indicated in the axioms:
* points, lines, and planes were to be understood as elements of certain given sets
* undefined relations were to be treated as abstract correspondences or mappings
In this regard, and as Hilbert put it in a letter to Frege, "every theory is only an abstract structure or schema of concepts together with their necessary relations to one another, [while] the basic elements can be thought of in any way one likes."

The following anecdote speaks to the earnestness of Hilbert's intentions with respect to leaving his notions undefined:
::In 1891, Hilbert attended a lecture on the foundations of geometry given at the Deutsche Mathematiker-Vereinigung meeting in Halle. Decades later (in 1935) it was reported that Hilbert came out of that meeting greatly excited by what he had just heard and made his famous declaration: "it must be possible to replace 'point, line, and plane' with 'table, chair, and beer mug' without thereby changing the validity of the theorems of geometry."

An example of a serious deficiency in the ''Elements'' was Euclid's use of the same word "equal" for the many different equivalence relations that are important for geometry, among which are these:
* equality
* congruence of segments
* congruence of angles
* similarity for triangles and other figures
* having the same area, for figures
* having the same volume, for three-dimensional polyhedra
Hilbert (and those who continued his work) distinguished these relations by providing explicit definitions for them, using different symbols for them, and proving their properties -- with the exception of "equality", which is not a relation of geometry, but of logic. Hilbert would have understood this from the example of Peano's first axiomatization of arithmetic.

Among the insufficiencies of Euclid's ''Elements'' noted above was the lack of any definition (adequate or otherwise) for the notion of betweenness. Hilbert addressed this with specific axioms for this notion, known as the Axioms of Order, stating the first four as follows:
::II.1. If $A, B, C$ are points of a straight line and $B$ lies between $A$ and $C$, then $B$ lies also between $C$ and $A$.
::II.2. If $A$ and $C$ are two points of a straight line, then there exists at least one point $B$ lying between $A$ and $C$ and at least one point $D$ so situated that $C$ lies between $A$ and $D$.
::II.3. Of any three points situated on a straight line, there is always one and only one which lies between the other two.
::II.4. Any four points $A, B, C, D$ of a straight line can always be so arranged that $B$ shall lie between $A$ and $C$ and also between $A$ and $D$, and, furthermore, that $C$ shall lie between $A$ and $D$ and also between $B$ and $D$.
Hilbert next introduced the following definition, followed by the fifth and last axiom of order:
::::''Definition''. We will call the system of two points $A$ and $B$, lying upon a straight line, a segment and denote it by $AB$ or $BA$. The points lying between $A$ and $B$ are called the points of the segment $AB$ or the points lying within the segment $AB$. All other points of the straight line are referred to as the points lying outside the segment $AB$. The points $A$ and $B$ are called the extremities of the segment $AB$.
::II.5. Let $A, B, C$ be three points not lying in the same straight line and let $a$ be a straight line lying in the plane $ABC$ and not passing through any of the points $A, B, C$. Then, if the straight line $a$ passes through a point of the segment $AB$, it will also pass through either a point of the segment $BC$ or a point of the segment $AC$.
::Axioms II, 1–4 contain statements concerning the points of a straight line only, and, hence, we will call them the linear axioms of group II. Axiom II, 5 relates to the elements of plane geometry and, consequently, shall be called the plane axiom of group II.
As is obvious from reading his introduction, Hilbert developed his geometry axiomatically, but stated it informally, i.e. in ordinary language rather than in the language of a formal logic -- a consequence of the simple fact that, as noted previously, he lacked the logical tools to do otherwise. Also obvious in Hilbert's presentation is the fact that running alongside his mathematics was a non-mathematical urging:
::Hilbert (writing in German) clearly wanted us (reading in English) to accept that his geometry, developed axiomatically using his undefined terms, was a (faithful) translation of the notions and concepts of our geometrical intuition(s) expressed in our ordinary language(s).
In this regard, it is interesting to compare Hilbert's informally stated axioms of order with axioms stated just slightly more formally. The following is a presentation of axioms of order for plane geometry:
:Given the following:
:* A set $\alpha$ called a ''plane'', elements $P, Q, R, \ldots$ of this set called ''points'', and certain subsets $l, m, n, \ldots$ of the plane called ''lines''.
:* An undefined relation, symbolized $∗$.
:Definitions:
:* A line $l$ is the set of all points $P$ in the plane such that $P \in l$.
:* Two lines $l, m$ are equal if $P \in l \iff P \in m$, for all points $P$.
:* Points $P, Q$ are ''collinear'' if there is a line $l$ in the plane such that $P, Q \in l$.
:Axioms of order are these:
::# If $A ∗ B ∗ C$, then both $A, B, C \in l$ for some line $l$ and also $C ∗ B ∗ A$.
::# Given two distinct points $A$ and $B$, there exists a point $C$ such that $A ∗ B ∗ C$.
::# If $A, B, C \in l$, then exactly one of the statements $A ∗ B ∗ C$, $A ∗ C ∗ B$, and $B ∗ A ∗ C$ is true.
::# Let $A, B, C \notin l$ be non-''collinear'' points. If there exists $D \in l$ so that $A ∗ D ∗ C$, then there exists an $X \in l$ such that $A ∗ X ∗ B$ or $B ∗ X ∗ C$.
Unlike Hilbert's presentation, these axioms of order develop the undefined geometric property $∗$ without appeal to the ordinary-language words that we use to express our geometrical intuitions. A linguistically sparer presentation would result from omitting entirely the terms "point," "line," and "plane".
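These order axioms have a familiar model (an illustration added here for concreteness; it is not part of the presentation cited above). Take the plane $\alpha$ to be $\mathbb{R}^2$, take the lines to be the solution sets of equations $ax + by = c$ with $(a, b) \neq (0, 0)$, and define $A ∗ B ∗ C$ to mean that $A$, $B$, $C$ are distinct and $B = (1 - t)A + tC$ for some real number $t$ with $0 < t < 1$. Elementary coordinate computations then verify each axiom:
::the defining condition is symmetric under exchanging $A$ with $C$ and forces $A$, $B$, $C$ onto the line through $A$ and $C$ (axiom 1); given distinct points $A$ and $B$, the point $C = 2B - A$ satisfies $A ∗ B ∗ C$ (axiom 2); parametrizing a line shows that, of three distinct collinear points, exactly one lies between the other two (axiom 3); and axiom 4 is the Pasch property, which holds in the Euclidean plane.
The existence of such an interpretation is exactly what the earlier talk of "place-holders" amounts to: the symbol $∗$ acquires content only through the axioms it satisfies.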
===The larger mathematical purpose===

The larger purpose of ''Grundlagen'' was to provide an axiomatic foundation sufficient not only for Euclid's geometry, but also for the various non-Euclidean geometries and, further, to enable those various theories of geometry to be related to other mathematical theories, specifically, the theory of real numbers.

Hilbert described the specific purpose of ''Grundlagen'' as an attempt to lay down a "simple" and "complete" system of "mutually independent" axioms, from which all known theorems of (Euclidean) geometry might be deduced. His larger overall purpose was to provide a foundation both different from the evidence of intuition and sufficient not only for Euclid's geometry, but (eventually) also for the various non-Euclidean geometries.

In his early (through 1905) writings, Hilbert considered axiomatic systems to be ''open'' systems:
::If geometry is to serve as a model for the treatment of physical axioms, we shall try first by a small number of axioms to include as large a class as possible of physical phenomena, and then by adjoining new axioms to arrive gradually at the more special theories.
Hilbert elaborated this in stating what he considered to be "the principal task of non-Euclidean geometry":
::constructing the various ''possible'' geometries by the successive introduction of elementary axioms, up until the ''final construction of the only remaining one'', Euclidean geometry.
This process of successively introducing axioms has been used in at least one presentation of Hilbert's axiom system for plane geometry, whose author described the process as follows:
::As we introduce Hilbert's axioms, we will gradually put more and more restrictions on these [basic] ingredients [points and lines in a plane] and in the end they will essentially determine Euclidean plane geometry uniquely.

Finally, the introductory paragraphs of Hilbert's first (German) edition mention only one axiom of continuity, the axiom of Archimedes. However, Hilbert added the following remark to the subsequent (French and English) translations:
::Remark. To the preceding five groups of axioms, we may add the following one, which, although not of a purely geometrical nature, merits particular attention from a theoretical point of view. It may be expressed in the following form:
::::Axiom of Completeness (Vollständigkeit): To a system of points, straight lines, and planes, it is impossible to add other elements in such a manner that the system thus generalized shall form a new geometry obeying all of the five groups of axioms. In other words, the elements of geometry form a system which is not susceptible of extension, if we regard the five groups of axioms as valid.
Thus, we can say that, already in these early editions, Hilbert worked with two axioms of continuity, which have come to be identified in the subsequent literature as V.1 and V.2:
::V.1 the Archimedean axiom
::V.2 the axiom of completeness
About these two axioms we know the following:
:* Axiom V.1 allows the measurement of segments and angles using real numbers…. Since Hilbert, this axiom is also known as the axiom of measurement….
:* There are several [alternative] axioms for completeness, with very similar implications, which nevertheless have slight but deep differences…. Hilbert [himself] suggested different axioms of continuity in different editions of his Foundations of Geometry.
:* The version of axiom V.2 introduced in Hilbert's first edition is based on Cantor's definition of the real numbers. Alternative versions are based on the definitions of Dedekind and Weierstrass.

Speaking generally, we can say this about how the hierarchy of geometries is related to the axioms as Hilbert grouped them:
* A ''Hilbert plane'' is any model for two-dimensional geometry in which Hilbert's axioms of incidence, order, and congruence hold. The axioms of the Hilbert plane form the basis of both Euclidean and non-Euclidean geometry. Neither the axioms of continuity (the Archimedean axiom and the axiom of completeness) nor the parallel axiom need to hold for an arbitrary Hilbert plane. The geometry of the Hilbert plane has been termed ''neutral geometry'', "because it neither affirms nor denies the parallel axiom."
* A ''Pythagorean plane'' is a Hilbert plane for which the axiom of parallelism holds.
* A ''Euclidean plane'' is a Pythagorean plane for which the axioms of continuity hold.

Hilbert knew that developing Euclid's geometry with "a simple and complete set of independent axioms" required decisions resulting in his being true to ''the spirit'' rather than to ''the letter'' of the ''Elements''. Among those decisions, the following two are closely related:
* the introduction of $circle$ as a defined (non-primitive) unaxiomatized term
* the use of a "completeness" axiom that introduced real numbers
Indeed, some modern authors have suggested that "a more natural way to do geometry" would result from an approach such as the following:
* introducing circles in a natural, classical way
* introducing continuity in a way directly related to proofs
However, adding $circle$ to the other primitive terms would have introduced a redundancy, a criticism that Hilbert himself levelled against Pasch's geometry. Hilbert's intention was that the assumptions (terms and axioms) should be, in some sense, a minimal set necessary to prove the propositions and support the constructions. Even so, it would have been possible to introduce axioms for circles that supported the completeness necessary for various propositions and constructions of Euclid's ''Elements'' without introducing the real numbers. Here are two such axioms:
::''Line-circle intersection property'': A line that contains a point inside a circle does intersect the circle.
::''Circle-circle intersection property'': Given two circles $\gamma$, $\delta$, if $\delta$ contains at least one point inside $\gamma$ and at least one point outside $\gamma$, then $\gamma$ and $\delta$ will meet.
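To see why such axioms have real content, consider a worked illustration (added here; it is not drawn from the sources discussed above) in the "rational plane", whose points are the pairs of rational numbers:
::The line $y = x$ passes through the centre of the circle $x^2 + y^2 = 1$, so it certainly contains a point inside the circle; yet solving the two equations gives the intersection points $\left(\pm\tfrac{1}{\sqrt{2}}, \pm\tfrac{1}{\sqrt{2}}\right)$, which are not rational points. The line-circle intersection property therefore fails in the rational plane.
Adjoining square roots of positive quantities (that is, working over a "Euclidean" ordered field) is enough to restore both intersection properties, which is the sense in which circle axioms can supply the completeness needed for Euclid's constructions without bringing in the full continuum of real numbers.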
However, Hilbert's larger mathematical purpose was broader than doing geometry in a rigorous, natural way. His choice of a more powerful axiom of completeness, an easy road towards his larger mathematical and meta-mathematical purposes, enabled the following:
* expanding the ''Euclidean'' plane to the ''Cartesian'' plane
* proving the relative consistency of geometry and analysis
* establishing a categorical axiom system for geometry

===The overall meta-mathematical purpose===

The overall purpose of ''Grundlagen'' was to examine various meta-mathematical notions that applied not to mathematical objects, such as points, lines, integers, reals, etc., but rather to axioms of mathematical theories, such as geometry and analysis.

An early example of Hilbert's concern with the meta-mathematics of axiomatization arose from his familiarity with Pasch's axioms for geometry, which he knew included a redundancy. Specifically, Pasch's Archimedean axiom could be derived from others in his system. Hilbert considered this to be a deficiency. Even at this early date, then, Hilbert understood that the axioms for a geometry should be, in some sense, a ''minimal'' set of assertions from which the whole of the geometry could be deduced.

These are the meta-mathematical notions that Hilbert (to some extent) examined in ''Grundlagen'' and applied to its system of axioms: ''simplicity'', ''completeness'', (mutual) ''independence'', ''compatibility'', and ''consistency''.

For Hilbert, the simplicity of an axiom consisted in its expressing "no more than a single idea." This notion was little referred to subsequently and was never formally defined. It was apparently received from Hertz and has been called an "aesthetic desideratum" of no mathematical significance.

The notion of completeness that Hilbert required of his axioms of geometry was what he would have required of an adequate axiomatization of any discipline, namely, that the axioms should yield all the known theorems of the discipline in question. The ''Grundlagen'' itself was evidence that Hilbert's axioms were indeed complete in this sense: he had derived from them all the known theorems in which he was interested, either of Euclidean geometry or, independently of the parallel postulate, of "absolute" or "neutral" geometry. However, ''evidence'' for completeness is not ''proof'' of completeness. Hilbert had no method of formally proving the completeness of his axioms that corresponded to his formal proof of their independence.

In ''Grundlagen'', Hilbert actually succeeded (more or less) in demonstrating the following:
* the independence of the SAS axiom, of the axiom of parallels from the other Euclidean axioms, and of some important theorems from specific groups of axioms
* the consistency of various sub-groups of the axioms
* the relative consistency of the entire set of axioms for Euclidean geometry, assuming the consistency of the real number system
* various relations of provability
Hilbert's demonstrations of the consistency and the independence of his axioms were demonstrations he made relative to a familiar background theory whose consistency was accepted. More specifically, he proved that the consistency of geometry could be reduced to proving the consistency of arithmetic.
The method he used to do this was as follows:
* ''Consistency'': Given a set $AX$ of sentences (described as above) and a familiar background theory $B$, which is assumed to be consistent, construct an interpretation of the $n$ terms of $AX$ under which the members of $AX$ express theorems of $B$. This interpretation is an $n$-tuple satisfying the relation $R_{AX}$ defined by $AX$. Its existence demonstrates the satisfiability of $R_{AX}$ and, consequently, the consistency of $AX$ relative to that of $B$.
* ''Independence'': Given a set $AX$ of sentences, another statement $I$, and a familiar background theory $B$, which is assumed to be consistent, construct an interpretation of $AX$'s and $I$'s terms under which the members of $AX$ express theorems of $B$, while $I$ expresses the negation of a theorem of $B$. Proceeding as above, the consistency of $AX \cup \{\sim I\}$ relative to that of $B$ demonstrates the independence of $I$ from $AX$, relative to the consistency of $B$.
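The two recipes can be made concrete with the standard textbook examples (added here for illustration; Hilbert's own models differ in their details):
::''Consistency'': interpret $point$ as an ordered pair $(x, y)$ of real numbers, $line$ as the solution set of an equation $ax + by + c = 0$ with $(a, b) \neq (0, 0)$, incidence as membership, and betweenness and congruence by the usual coordinate formulas. Under this interpretation every geometric axiom expresses a theorem about the real numbers, so the geometric axioms are consistent relative to the arithmetic of the reals.
::''Independence'': interpret the same primitive terms in a model such as the Poincaré disc, in which $points$ are the interior points of a fixed circle and $lines$ are suitable circular arcs meeting that circle at right angles. The axioms of incidence, order, and congruence hold there while the parallel axiom fails, so the parallel axiom cannot be a logical consequence of the other axioms.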
As Hilbert understood it, consistency applied to the abstract structure of concepts and relations that were defined by $AX$ when its (geometric) terms were taken as place-holders. The consistency that he had in mind held of $AX_\mathbb{G}$ if and only if it held of $AX_\mathbb{R}$, since both shared (were instances of) the same abstract structure. Equivalently, if there is an interpretation under which the sentences of $AX_\mathbb{G}$ expressed truths of $AX_\mathbb{R}$, then the question of the consistency of $AX_\mathbb{G}$ relative to $AX_\mathbb{R}$ was answered in the affirmative. In the context of formal theories, Hilbert's conception of consistency and his associated methodology for consistency proofs are, for the most part, standard today.[317]

All this notwithstanding, the question has been asked: why did Hilbert actually address the consistency of his axioms? It seems unlikely that he really entertained the possibility that Euclidean geometry contained contradictions, since he conceived it as "an empirically motivated discipline, turned into a purely mathematical science after a long, historical process of evolution and depuration." Further, Hilbert had (in his first German edition) presented a model of Euclidean geometry "on a countable, proper sub-field—of whose consistency he may have been confident—and not the whole field of real numbers." The issue of continuity of the real numbers might have raised difficulties, but there were no such difficulties arising from these fields of numbers. The suggested explanation is that Hilbert's attention to consistency arose as a result of his belief that "the axiomatic treatment of geometry was part of a larger enterprise, relevant also for other physical theories." He considered that contradictions might have been introduced into geometry as a result of the particular way in which he had formulated his axioms, not only in order to account for the theorems of physical science, but also as a result of the recent development of non-Euclidean geometries.[318]

The rigour required for the axiomatic analysis underlying ''Grundlagen'' made necessary many additions, corrections, and improvements over the years following the book's first edition. Most of these changes concerned only details. The basic structure of ''Grundlagen'' (the groups of axioms, the theorems considered, and the innovative methodological approach) remained unchanged through many editions.[319] The ideas and methods that Hilbert put to work in his early efforts at axiomatization not only made possible at the time a foundation for geometry and the theory of real numbers, but are also still at work today, shaping contemporary mathematical practice.[320]

===Two curious aspects of ''Grundlagen''===

There are two somewhat curious aspects of Hilbert's understanding of the axiomatization of geometry:
* the role of intuition in developing axioms
* the relationship of geometry to the physical sciences
The influence of both of these is reflected in the manner in which Hilbert discussed the axioms of his geometry -- see Some specifics of the Hilbert axiomatization.

With respect to the role of intuition, even if it was not to serve as the foundation for geometry, Hilbert nevertheless understood that the process of axiomatization began with intuitions of a domain of facts [Tatsachen]. In summary, Hilbert's description of the way the axiomatic method proceeds is as follows:[321]
* it analyzes the theorems and concepts of a mathematical theory
* it isolates the basic principles that correspond to intuitive ideas
* it formalizes these principles as axioms
In 1905, in a course titled "The Logical Principles of Mathematical Thinking," Hilbert presented his geometry anew. His discussion, which included the many corrections and additions introduced since 1900, started with the same three kinds of undefined elements: points, lines, and planes. He described this choice as "arbitrary," by which he meant constrained not merely by the mathematical requirement of consistency, but also "by the need to remain close to the 'intuitive facts of geometry'." Thus, instead of his three chosen elements, Hilbert said he could have started with "circles and spheres," formulating axioms of geometry "that are still in agreement with the usual, intuitive geometry."[322]

With respect to the relationship of geometry to the physical sciences, Hilbert viewed the axiomatization of geometry as part of a larger task: the axiomatization of natural science in general and of physics, especially mechanics, in particular. This view stemmed in part from his having taught (between 1897 and 1899) seminars on mechanics and also a full course on mechanics. In the latter, he compared geometry and mechanics as follows:[323]
::Geometry also [like mechanics] emerges from the observation of nature, from experience. To this extent, it is an experimental science. ... But its experimental foundations are so irrefutably and so generally acknowledged, they have been confirmed to such a degree, that no further proof of them is deemed necessary. Moreover, all that is needed is to derive these foundations from a minimal set of independent axioms and thus to construct the whole edifice of geometry by purely logical means. In this way [i.e., by means of the axiomatic treatment] geometry is turned into a pure mathematical science.
Hilbert's view of geometry as a close relation of mechanics also had roots in his acquaintance with Hertz and his knowledge and respect for Hertz's writings.
In his 1899 course on Euclidean geometry, Hilbert stated his goal for the axiomatization of geometry as follows:[324]
::a complete description, by means of independent statements, of the basic facts from which all known theorems of geometry can be derived
Hilbert credited Hertz's ''Principles of Mechanics'' as the source of this statement.

==Hilbert's 2nd problem==

In his 1900 lecture to the International Congress of Mathematicians in Paris, David Hilbert presented a list of open problems in mathematics. He expressed the 2nd of these problems, known variously as the compatibility of the arithmetical axioms and the consistency of arithmetic, as follows:[325]
::When we are engaged in investigating the foundations of a science, we must set up a system of axioms which contains an exact and complete description of the relations subsisting between the elementary ideas of that science. The axioms so set up are at the same time the definitions of those elementary ideas; and no statement within the realm of the science whose foundation we are testing is held to be correct unless it can be derived from those axioms by means of a finite number of logical steps. Upon closer consideration the question arises: Whether, in any way, certain statements of single axioms depend upon one another, and whether the axioms may not therefore contain certain parts in common, which must be isolated if one wishes to arrive at a system of axioms that shall be altogether independent of one another. But above all I wish to designate the following as the most important among the numerous questions which can be asked with regard to the axioms: To prove that they are not contradictory, that is, that a definite number of logical steps based upon them can never lead to contradictory results.
Thus, we can see that Hilbert's 2nd problem arose from a principle that had emerged in his thought during his work on the foundations of geometry:[326]
::"mathematical existence is nothing other than consistency"
Hilbert had successfully provided axioms for geometry. He now sought to introduce a program for axiomatizing the whole of mathematics. Such a program would require not only axioms for analysis, but also a direct proof of the consistency of analysis:[327]
::... a direct method is needed for the proof of the compatibility of the arithmetical axioms. The axioms of arithmetic are essentially nothing else than the known rules of calculation, with the addition of the axiom of continuity. I recently … replaced the axiom of continuity by two simpler axioms, namely, the well-known axiom of Archimedes, and a new axiom essentially as follows: that numbers form a system of things which is capable of no further extension, as long as all the other axioms hold (axiom of completeness). I am convinced that it must be possible to find a direct proof for the compatibility of the arithmetical axioms, by means of a careful study and suitable modification of the known methods of reasoning in the theory of irrational numbers.
Clearly, Hilbert's wording of the "new axiom" (of completeness) for arithmetic used in the 2nd problem parallels his initial wording of the axiom of completeness that he introduced for geometry in the ''Grundlagen''. More generally, however, the whole of the 2nd problem is itself heir to the three decades-long effort, which preceded his lecture, to axiomatize not only geometry, but also arithmetic, set theory, and logic itself.
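For reference, the two arithmetical axioms mentioned in the passage just quoted can be paraphrased in modern terms (the paraphrase is added here; the wording is not Hilbert's):
::''Axiom of Archimedes'': for any two positive magnitudes $a$ and $b$ there is a natural number $n$ such that $n \cdot a > b$.
::''Axiom of completeness'': the numbers form a system that admits no proper extension to a larger system still satisfying all of the other axioms, including that of Archimedes.
Taken together, these two conditions single out the real numbers among the Archimedean ordered fields, paralleling the way the geometric completeness axiom singles out the ordinary Cartesian plane among the models of the other geometric axioms.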
In the decades following his initial statement of the 2nd problem, Hilbert made it more explicit by developing "a formal system of explicit assumptions" (see Axiom and Axiomatic method) upon which he intended to base the methods of mathematical reasoning, eventually stipulating that any such system must be shown to have these characteristics:[328][329] the assumptions should be "independent" of one another (see Independence) the assumptions should be "consistent" (free of contradictions) (see Consistency) the assumptions should be "complete" (represents all the truths of mathematics) (see Completeness) there should be a procedure for deciding whether any statement expressed using the system is true or not (see Decision problem and Undecidability) Hilbert's 2nd problem is said by some to have been solved, albeit in a negative sense, by K. Gödel (see Hilbert problems and Gödel incompleteness theorem). ↑ Dedekind (1888) p. 35 cited in Gillies p. 8 ↑ Dasgupta p. 29 ↑ Jones (1996) ↑ Ewald (2002) p. 2 ↑ Compare this section with a related discussion of non-mathematical issues in the Arithmetization of analysis program. ↑ Renfro ↑ Waterhouse p. 435, cited in Renfro. In Renfro's opinion, Cantor's "disagreement is only with the words, not with Gauss' actual ideas…. Cantor objected not to Gauss' statement in context but to the meaning attributed to it by his own contemporaries." ↑ O'Connor and Robertson (2005) ↑ Grattan-Guinness p. 125 footnote cited in Renfro ↑ Boyer (1939) pp. 270-271 cited in Renfro ↑ Wikipedia "C S Peirce" § "Mathematics" emphasis added. Readers are encouraged to review the Mathematics section of this Wikipedia article for its notes and references for these and others of Peirce's discoveries. ↑ Russinoff p. 451 emphasis added ↑ Edwards cited in Russinoff p. 451 ↑ O'Connor and Robertson (2002) "Frege" ↑ Gillies p. 78 ↑ Reck (2013) ↑ Russell cited in O'Connor and Robertson (2002) "Frege" ↑ O'Connor and Robertson ↑ Encyclopedia Britannica "Augustus De Morgan" ↑ Hodges (2015) ↑ Reck (2013) Abstract ↑ Ewald (2002) ↑ For related modern mathematical notions, see Abstraction, mathematical, Abstraction of actual infinity, Abstraction of potential realizability, and Infinity ↑ Netz ↑ Spalt cited in O'Connor and Robertson (2002) ↑ Netz, Saito, and Tchernetska cited in O'Connor and Robertson (2002) ↑ Kirschner 2.6 Mathematics ↑ O'Connor and Robertson, (2002) ↑ O'Connor and Robertson, (2002). The property that an infinite set can be put into one-to-one correspondence with a proper subset of itself is today known as the Hilbert infinite hotel property. ↑ Waterhouse cited in O'Connor and Robertson (2002) ↑ Bolzano cited in O'Connor and Robertson (1996) (2002) (2005) ↑ Dedekind (1930/32) Vol. 1, pp. 46-47, quoted in Kanamori (2012) p. 49, cited in Reck (2013) slide 5 ↑ Ferreirós (2011b) §1 ↑ Gillies p. 8 emphasis added ↑ Reck (2011) §2.2 ↑ Gillies p. 8 ↑ Bochenski cited in Boyer (1968) p. 633 ↑ See the Historical sketch in Mathematical logic ↑ Moore p. 96 ↑ Ferreirós (2001) p. 442 ↑ Jones "The History of Formal Logic" ↑ Jones "A Short History of Rigour in Mathematics" ↑ Boyer (1968) p. 633 ↑ van Benthem et. al. (2014). The expository material on Aristotelian syllogisms is excerpted from Chapter 3 of the text. ↑ Russinoff p. 454 ↑ Josiah Royce quoted in Shen cited in Russinoff p. 451 ↑ Boyer (1968) pp. 
633-634 emphasis added ↑ Peacock cited in O'Connor and Robertson (2015) ↑ Peacock cited in O'Connor and Robertson (2015) emphasis added ↑ O'Connor and Robertson "Augustus De Morgan" ↑ De Morgan (1849) cited in Barnett p. 3 ↑ Barnett p. 1 ↑ Gillies pp. 74-75 ↑ Boyer (1968) pp. 633-634 ↑ See Boolean algebra. ↑ Burris (2014) §4. ↑ Burris (2014). Burris provides a detailed, step-by-step description of the process that Boole used to analyze arguments using his algebraic logic. ↑ Boyer (1968) p.635 ↑ Burris (2014) provides a selection of examples illustrating the workings of his methods, including "a substantial example" of the workings of Boole's General Method found in his 1854 work. ↑ O'Connor and Robertson (2004) emphasis added ↑ van Benthem 2. The shift from classical to modern logic ↑ Burris (2015) §4. Jevons... ↑ Grattan-Guinness (1991) cited in O'Connor and Robertson (2000) ↑ Boyer (1968) p. 636. See De Morgan laws for a modern formal statement of these laws. See Duality principle for a general discussion of mutual substitution of logical operations in the formulas of formal logical and logical-objective languages. ↑ Encyclopædia Britannica "History of Logic § C S Peirce" emphasis added ↑ Tarski p. 73 ↑ Anellis (2012b) § 2 ↑ Peirce cited in Anellis (2012b) § 2 ↑ Anellis (2012a) p. 246 ↑ Anellis (2012a) pp. 252-253 ↑ Tarski pp. 73-74 ↑ Set theory ↑ Wikipedia "Naive set theory" ↑ Porubsky notes that the term naive set theory came into broad use in the 1960s following its use as the title of Halmos' text. ↑ Bolzano cited in Porubsky ↑ Bolzano §4 cited in Tait p. 2 ↑ Porubsky ↑ Tait p. 2 ↑ Bolzano §3 cited in Tait p. 3. Tait tempers this criticism of Bolzano's understanding, noting that both Cantor and Dedekind also avoided the null set -- "no whole has zero parts" -- and that "as late as 1930, Zermelo chose in his important paper [1930] on the foundations of set theory to axiomatize set theory without the null set." ↑ Bolzano §11 cited in Tait p. 3 ↑ Brown (2010) §"Naive Set vs. Axiomatic Set Theories" ↑ O'Connor and Robertson "A history of set theory" ↑ Bagaria §1 ↑ Ebbinghaus p. 298 cited in Porubsky ↑ Jones (1996) § The Formalization of Mathematics ↑ Bagaria (2014) §1 ↑ Ferreirós (2011b) §1 citing Ewald (1996) Vol. 2 ↑ Burris (1997) ↑ El Naschie (2015) ↑ El Naschie, M S. (2015) ↑ Bagaria (2014) ↑ Tait p. 5-6 ↑ Tait pp. 5-6 ↑ Wikipedia "Set theory" ↑ Halmos cited in Toida (2013) §"Naive Set Theory vs. Axiomatic Set Theory" ↑ Toida (2013) §"Naive Set Theory vs. Axiomatic Set Theory" ↑ Azzano p. 8 ↑ Gillies (1982) p. 8 ↑ Azzano p. 12 ↑ Reck (2011) 2.1 ↑ Frege (1893) cited in Gillies p. 51 ↑ Gillies p. 52 emphasis added ↑ Ferreirós (2011a) p. 6 ↑ Dedekind (1888) cited in Gillies pp. 52-58 ↑ Gillies pp. 52-53. ↑ Gillies notes that these two notions, basic to set theory, were not completely distinguished, both notationally and conceptually, until Peano did so in 1894. ↑ Gillies speculates that Dedekind's "certain reasons" for excluding the empty set arise from difficulties caused by his conflating $a \in S$ and $A \subseteq S$. ↑ Zermelo (1908) cited in Gillies pp. 52-58. Gillies notes that Dedekind's 1888 work "is the principal source" for Zermelo's 1908 paper, in which "Zermelo frequently refers to Dedekind." ↑ Jones § The Formalization of Mathematics ↑ Toida 4.1 Why Predicate Logic? ↑ Peirce pp. 194-195 cited in Moore p. 
99 ↑ HTFB (2015) ↑ Mattey § Gottlob Frege ↑ Frege (1879) cited in Mattey § Gottlob Frege ↑ Harrison (1996) § The History of Formal Logic ↑ Math Stack Exch ↑ Sowa "Comments on Peirce's …" § 1. Historical Background ↑ Sowa § 1. Historical Background ↑ Sowa § 1. Historical Background -- emphasis added ↑ Bezhanishvil p. 1 ↑ Podnieks § 3.1 ↑ Kennedy (1963) ↑ Peirce (1881) p. 85 cited in Anellis (2012a) pp. 260-261. Anellis notes that, though Peirce uses the word "syllogistic" here, he was already translating syllogisms in his algebraic logic into implications using a conditional connective. ↑ Shields cited by Anellis (2012a) p. 259-260 ↑ O'Connor and Robertson (2002) "Frege" ↑ Frege (1884) cited in Demopoulos p. 7 ↑ Frege (1879) p. 136 cited in Gillies p. 71 emphasis added ↑ Frege (1884) § 2 cited in "Philosophical Summaries" emphasis added ↑ Frege (1884) § 4 cited in Demopoulos p. 5 emphasis added ↑ Frege (1884) cited in Gillies p. 46-48 ↑ Frege (1884) cited in Dietz ↑ Tait p.8 ↑ Frege (1892) ↑ Ferreirós (1996) pp. 18-19 ↑ Ferreirós (1996) pp. 18-19. Ferreirós notes (with surprise) that, in spite of its importance to naive set theory, the unrestricted principle of Comprehension was almost nowhere stated clearly before it was proved to be contradictory! ↑ Anellis (2012?) p. 260 ↑ Reck (2011) § 2.2 ↑ Gillies cited in Azzano p. 5 ↑ Dedekind (1888) pp. 99-100 cited in Awodey and Reck p. 8 ↑ Ferreirós (2011a) p. 19 ↑ Ferreirós (2011a) p. 19. Ferreirós restates Dedekind's conditions as follows: Condition 1. "$N$ is closed under the map $ϕ$"; Condition 3: "$N$ is the minimal closure of the unitary set $\{e\}$ under $ϕ$." ↑ Harrison (1996) ↑ Staub p. 96 ↑ Hardegree p. 11 n. 13 ↑ Kennedy (1974) p.41 ↑ Peano p. 102 cited in GIllies p. 66 ↑ Nidditch cited in Staub p. 96 ↑ Peano cited in Kennedy (2002) p. 40-41 ↑ Kennedy (2002) p. 42 ↑ Kennedy (2002) p. 8 ↑ See Peano axioms and Wikipedia § Peano axioms for modern formalizations of Peano's axioms. ↑ Pon p. 5 ↑ See Peano axioms and Wikipedia § Peano axioms for discussions of consistency and independence of the axioms. ↑ Stepanov Slide 31 ↑ Wang p. 145 cited in Podnieks p.93 ↑ Peano (1889) trans. by Kennedy p.103 cited in Gillies p. 66 ↑ Reck (2011) § 2.2. To further make his case, Reck notes, albeit parenthetically, that "Peano, who published his corresponding work in 1889, [one year after Dedekind published his conditions,] acknowledged Dedekind's priority." ↑ Staub p. 98 emphasis added ↑ Kennedy (2002) p. 41 emphasis added ↑ Joyce cited in Staub p. 98 ↑ Ferreirós (2011b) § 3 ↑ Lavine pp. 38, 41 cited in Curtis p. 87 ↑ Tait p. 11 ↑ Tait p. 18 emphasis added ↑ Nunez p. 1732 ↑ Tait pp. 18 ↑ Tait p. 19, note 14 ↑ Tait pp. 18-19 ↑ Weisstein "Ordinal Numbers" ↑ Set theory, Encyclopedia of Mathematics ↑ Lavine pp. 53-54 cited in Curtis pp. 87-88 ↑ Lavine p. 63 cited in Curtis ↑ Ferreirós (2011b) § 3. Ferreirós notes that Zermelo and others "believed that most of those paradoxes dissolved as soon as one worked within a restricted axiomatic system," which is to say a system in which mathematicians typically work. ↑ Lavine p. 144 cited in Curtis p. 88, notes that Zermelo subsequently developed the "iterative concept" of a set, on which view "no set can be a member of itself, which rules out the set of all sets not members of themselves," i.e. it rules out Russell's paradox. ↑ Kanamori p. 17 ↑ Kanamori p. 17. 
Kanamori notes that Zermelo agreed, quoting him as writing "if in set theory we confine ourselves to a number of established principles … that enable us to form initial sets and to derive new sets from given ones…, then all such contradictions can be avoided." ↑ Ferreiros (2001) pp. 443-444 ↑ Ferreiros (2011a) p. 6 ↑ Harrison (1996) § Rigour and the axiomatic method ↑ Nowlan "Moritz Pasch" ↑ Seidenberg (2008) cited in O'Connor and Robertson "Moritz Pasch" ↑ Toretti (2010) § 4 ↑ Toretti (2010) § 4. Toretti credits this version of the principle of duality to Gergonne (1825) and notes, "The same result is secured … by exchanging not the words, ['point' for 'line', etc.,] but their meanings. ↑ Moritz Pasch ↑ Boyer (1968) p. 654 ff. ↑ Corry p. 147 ↑ Venturi (2012) p. 12. Venturi comments (n. 37) that Hilbert made his choice of axiomatics as the method he would use to establish a sound basis for geometry even though, at the time, he "lacked the logical tools" to implement that method fully. ↑ Rothe p. 31 ↑ Hilbert (1899) p. 2 ↑ Hilbert pp. 2-3 ↑ Sterrett p. 1 ↑ Hilbert (1899) p. 1 cited in Venturi (2012) p. 3 ↑ Blanchette § 2 ↑ Venturi (2012) p. 3 emphasis added ↑ Venturi (2012) p. 18 ↑ Hilbert cited in Blanchette § 2 ↑ Blumenthal p. 402-403 cited in Corry (2011) p. 140 ↑ Rothe (2015) p. 35 ↑ Hilbert (1902). pp. 3-4. Axiom II.4 was discarded after it proved to be redundant. ↑ Hilbert (1902) pp. 4-5. Hilbert credits Pasch as having been the first to study these axioms and states that "Axiom II, 5 is in particular due to him." p. 3. n. 2 ↑ Richter pp. 3-4. The actual presentation is somewhat modified from Richter's original. ↑ Hilbert (1900) ↑ Hilbert in a letter to Klein cited in Corry p. 141 emphasis added ↑ Jahren p. 1 ↑ Rothe p. 36. See also p. 256, where Rothe notes that a "very strong" continuity axiom based on Dedekind's definition of the real numbers introduces the real numbers into geometry, and then comments, "which is not in the spirit of Euclid." ↑ Hartshorne p. 97 ↑ Rothe p. 364 ff. ↑ Corry pp. 140-141 ↑ Rothe p. 37. Rothe suggests, in view of what we know today about the limitations of such investigations, that only a person of Hilbert's great optimism could/would have addressed such questions at that time. ↑ Venturi (2012) p. 3 ↑ Hilbert (1905) cited in Corry p. 162 ↑ Hilbert (1898-1899) cited in Corry pp. 144-145 ↑ Corry p. 144-145 ↑ Hilbert (1902) § 2 ↑ Ferreirós (1996) p. 2 Ferreirós notes: "the first published formulation of the idea that mathematical existence can be derived from consistency" appeared in Hilbert's 1900 paper "Über den Zahlbegriff." This paper appeared immediately prior to the published version of his Problems Address. ↑ Hilbert (1900) § 2 emphasis added. ↑ Calude and Chaitin ↑ Pon Blumenthal, Otto. (1935). "Lebensgeschichte." (Hilbert 1932–1935, vol. 3, 387–429). Bolzano, B. (1851). Paradoxien des Unendlichen (ed. by F. Pryhonsky), Reclam'; [English translation by D. A. Steele, ''Paradoxes of the Infinite'', London: Routledge & Kegan Paul, 1950]. Boole, G. (1847, [1951]). The Mathematical Analysis of Logic, Being an Essay Towards a Calculus of Deductive Reasoning, Macmillan, Barclay, & Macmillan, [Reprinted Basil Blackwell]. Boole, G. (1854, [19158]). An Investigation of The Laws of Thought on Which are Founded the Mathematical Theories of Logic and Probabilities, Macmillan, [Reprinte Dover]. Dedekind, R. (1888). Was sind und was sollen die Zahlen?, Vieweg, [English trans., (1901) "The Nature and Meaning of Numbers", in Essays on the Theory of Numbers, W.W. 
Beman, ed. and trans., Open Court Publishing Company]. Dedekind, R. (1930-32). Gesammelte Mathematische Werke, Vols. 1-3, R. Fricke et al., eds., Vieweg. De Morgan, A. (1847). Formal Logic: or, The Calculus of Inference, Necessary and Probable, Taylor and Walton. De Morgan, A. (1849). Trigonometry and Double Algebra, Taylor, Walton & Maberly. Frege, G. (1879) Begriffsschrift, eine der arithmetischen nachgebildete Formelsprache des reinen Denkens, ["Conceptual Notation …", English translation by T W Bynum, Oxford University Press, 1972]. Frege, G. (1884). Die Grundlagen der Arithmetik, [The Foundations of Arithmetic, English translation J L Austin, Basil Blackwell, 1968]. Frege, G. (1892) Uber Sinn und Bedeuting, ["On Sense and Reference," Translations from the Philosophical Writings of Gottlob Frege, Geach and Black (eds.) Blackwell, 1960, pp. 56-78]. Frege, G. (1893) Grundgesetze, [The Basic Laws of Arithmetic, English translation by M Firth, University of California, 1964]. Grassmann, H. (1861). Lehrbuch der Arithmetik, Enslin, Berlin. Hilbert, D. (1898–1899). Mechanik. Nachlass David Hilbert, (Cod. Ms. D. Hilbert, 553). Hilbert, D (1899) The Foundations of Geometry, English trans. Townsend, E J. (1902). The Open Court Publishing Co., URL: https://www.gutenberg.org/files/17384/17384-pdf.pdf, Accessed: 2015/08/16. Hilbert, D. (1900). "Über den Zahlbegriff," Jahresbericht der Deutschen, Mathematiker-Vereinigung 8, 180–184. (English translation in Ewald, W. (1996). From Kant to Hilbert: A source book in the foundations of mathematics, vol. 2, Oxford University Press. Hilbert, D.(1900). "Mathematische Probleme," Nachr. K. Ges. Wiss. Göttingen, Math.-Phys. Klasse (Göttinger Nachrichten) , 3 pp. 253–297 (Reprint: Archiv Math. Physik 3:1 (1901), 44-63; 213-237; also: Gesammelte Abh., dritter Band, Chelsea, 1965, pp. 290-329) Zbl 31.0068.03, URL: https://www.math.uni-bielefeld.de/~kersten/hilbert/rede.html, Accessed: 2015/06/03. Hilbert, D. (1902). "Mathematical problems," Bull. Amer. Math. Soc. , 8 pp. 437–479, MR1557926 Zbl 33.0976.07, (Reprint: ''Mathematical Developments Arising from Hilbert Problems'', edited by Felix Brouder, American Mathematical Society, 1976), URL: http://aleph0.clarku.edu/~djoyce/hilbert/problems.html, Accessed: 2015/06/03. Hilbert, D. (1905). Logische Principien des mathematischen Denkens. (Manuscript/Typescript of Hilbert Lecture Notes. Bibliothek des Mathematischen Instituts, Universität Göttingen, summer semester, 1905, annotated by E. Hellinger.) Jevons, W S. (1890). Pure Logic and Other Minor Works, Robert Adamson and Harriet A. Jevons (eds), Lennox Hill Pub. & Dist. Co. [Reprinted 1971]. Ladd-Franklin, C. (1883). "On the algebra of logic," Studies in logic by the members of the Johns Hopkins University (C. S. Peirce, editor), Little, Brown, Boston, pp. 17-71. Pasch, M., 1882. Vorlesungen über neueren Geometrie, Leipzig: Teubner. Peacock, G. (1830). Treatise on Algebra. Peano, G. (1889). Arithmetices principia nova methodo exposita. [English translation in Kennedy, H C. (1973). Selected Works of Giuseppe Peano, Allen & Unwin. pp. 101-134.] Peirce, C S. (1881). "On the Logic of Number." American Journal of Mathematics, Vol. 4, pp. 85-95, 1881. Schröder, E. (1895). Algebra und Logik der Relativ. Zermelo, I. (1908). "Investigations in the Foundations of Set Theory," [English translation in van Heijenoort, J. (1967) From Frege to Godel, Harvard Univ. Press, pp. 199-215]. Anellis, I H. (2012a). 
"How Peircean was the 'Fregean' Revolution' in Logic?", URL: http://iph.ras.ru/uplfile/logic/log18/LI-18_Anellis.pdf, Accessed: 2015/08/07. Anellis, I H. (2012b) "Peirce's truth functional analysis and the origin of the truth table," History and Philosophy of Logic, Vol. 33, No. 1, pp. 87–97, 2012, URL: http://arxiv.org/pdf/1108.2429.pdf, Accessed: 2015/08/07. Awodey, S. and Reck, E.H. (2001). "Completeness and Categoricity: 19th Century Axiomatics to 21st Century Semantics," URL:http://www.hss.cmu.edu/philosophy/techreports/118_awodey.pdf, Accessed: 2015/06/19. Azzano, L. (2014). "What are Numbers?," URL: https://www.academia.edu/8596012/Dedekind_and_Frege_on_the_introduction_of_natural_numbers, Accessed: 2015/07/02. Bezhanishvili, G. "Peano Arithmetic," Project designed for upper level undergraduate course in Mathematical logic, New Mexico State University, URL: http://www.cs.nmsu.edu/historical-projects/Projects/121220100522peano-revised-1.pdf, Accessed: 2015/06/19. Blanchette, P. (2012). "The Frege-Hilbert Controversy", The Stanford Encyclopedia of Philosophy (Spring 2014 Edition), Edward N. Zalta (ed.), URL: http://plato.stanford.edu/archives/spr2014/entries/frege-hilbert/, Accessed: 2015/08/16. Bochenski, I M. (1956,[1961]) Formale Logik, North Holland, [English translation A History of Formal Logic, trans by Ivo Thomas, University of Notre Dame Press]. Boyer, C.B. (1939). The Concepts of the Calculus. A Critical and Historical Discussion of the Derivative and the Integral, Columbia University Press, vii + 346 pages, URL: http://catalog.hathitrust.org/Record/000165835 Accessed: 2015/06/29. Boyer, C B. (1968) A History of Mathematics, John Wiley & Sons, URL: https://archive.org/stream/AHistoryOfMathematics/Boyer-AHistoryOfMathematics_djvu.txt, Accessed: 2015/06/29. Brown, R G. (2010). Axioms, URL: http://www.phy.duke.edu/~rgb/Philosophy/axioms/axioms/, Accessed: 2015/07/16. Burris, S. (1997). "Set theory: Cantor," Supplementary Text Topics, URL: https://www.math.uwaterloo.ca/~snburris/htdocs/scav/cantor/cantor.html, Accessed: 2015/07/16. Burris, S. (2014). "Examples Applying Boole's Algebra of Logic," The Stanford Encyclopedia of Philosophy (Winter 2014 Edition), URL: http://plato.stanford.edu/archives/win2014/entries/boole/examples.html, Accessed: 2015/07/16. Burris, S. and Legris, J. (2015). "The Algebra of Logic Tradition," The Stanford Encyclopedia of Philosophy (Spring 2015 Edition), Edward N. Zalta (ed.), URL = http://plato.stanford.edu/archives/spr2015/entries/algebra-logic-tradition/, Accessed: 2015/07/16. Calude, C.S. and Chaitin, G.J. (1999). "Mathematics / Randomness everywhere, 22 July 1999," Nature, Vol. 400, News and Views, pp. 319-320, URL: https://www.cs.auckland.ac.nz/~chaitin/nature.html, Accessed: 2015/06/19. Chaitin, G. (2000). "A Century of Controversy Over the Foundations of Mathematics," Journal Complexity -- Special Issue: Limits in mathematics and physics, Vol. 5, No. 5, May-June 2000, pp. 12-21, (Originally published in Finite Versus Infinite: Contributions to an Eternal Dilemma, Calude, C. S.; Paun, G. (eds.); Springer-Verlag, London, 2000, pp. 75–100), URL: http://www-personal.umich.edu/~twod/sof/assignments/chaitin.pdf Accessed 2015/05/30. Corry, L. (2006). "The Origin of Hilbert's Axiomatic Method," Jürgen Renn et al (eds.). The Genesis of General Relativity, Vol. 4 Theories of Gravitation in the Twilight of Classical Physics: The Promise of Mathematics and the Dream of a Unified Theory, Springer (2006), pp. 139-236. 
CommonCrawl
Excel Ranges and Formulas

While not nearly as powerful as a statistical computing environment like R, Excel offers the advantage of being available in just about every work environment. As such, knowing how to do statistical calculations and simulations in Excel can be very useful. Rather than using named variables to store values to be used later, as one might see in R or other programming languages, Excel stores values it needs for subsequent calculations in cells. Each cell is part of a particular worksheet and has a unique row/column combination in that sheet. The row is indicated by a positive integer and the column is indicated by a letter (or sequence of letters if more than 26 letters need be used). An example worksheet is shown below.

Notice in the sheet above cells B3, B4, and B5 contain numerical values, while cells A3 and A7 contain text. There are other things that cells can contain as well. B7 looks like it contains a value, but clicking on it reveals that the content of the cell is actually a formula, and the value shown is simply the output of that formula. In this case, the formula finds the sum of the range of all cells between B3 and B5, inclusive, and is given by: "=SUM(B3:B5)". This formula can be seen by selecting cell B7 and then looking at the text-edit box with an $fx$ to its left in the toolbar at the top. Importantly, one should know that all formulas begin with "=", followed by some expression.

The expression B3:B5 in the formula above specifies a range of cells. Using ranges allows one to efficiently make calculations even if they involve large amounts of data. In general, a range takes the form of two cell addresses separated by a colon (":"). These two cells are typically the upper left and lower right cells of some rectangular collection of cells, although they may also be the lower left and upper right cells of the same.

The use of "SUM()" in the aforementioned formula specifies what function to apply to the range(s) of cells it is given as arguments inside the parentheses to the right of the function's name. Excel has a huge number of built-in functions that can be used to a great variety of ends. To see what it has to offer, click the "$fx$" button in the toolbar at the top to bring up the formula builder dialog box. Clicking on a particular function name brings up more information on what the corresponding function calculates, the proper syntax for using it (i.e., how to type it correctly), and a link to even more information on the function. The left side of the worksheet shows an example of using the AVERAGE() function on the range of cells given by B3:F5.

Formulas can also reference individual cell values, as the right side of the worksheet above demonstrates. If we clicked on cell K5, we would see "=2/(1/K3+1/K4)", the formula for the harmonic mean of the values seen in cells K3 and K4. When using mathematical expressions in a formula in Excel, the standard order of operations applies. Consequently, given that we must enter these expressions using a single line of text, one needs to be careful. A very common mistake is forgetting to group numerators, denominators, or even some exponents with parentheses, when their inclusion is necessary.
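Before moving on to autofill, here is a minimal Python sketch mirroring the two formulas quoted above ("=SUM(B3:B5)" and the harmonic-mean formula in K5). The cell values themselves are not given in the text, so the numbers below are placeholders chosen only for illustration.

```python
# A small dictionary stands in for the worksheet; the values are hypothetical.
cells = {"B3": 10.0, "B4": 20.0, "B5": 30.0,
         "K3": 4.0, "K4": 6.0}

# "=SUM(B3:B5)" -- sum over a range of cells
b7 = sum(cells[c] for c in ("B3", "B4", "B5"))

# "=2/(1/K3+1/K4)" -- harmonic mean of two cells
k5 = 2.0 / (1.0 / cells["K3"] + 1.0 / cells["K4"])

print(b7)  # 60.0 with these placeholder values
print(k5)  # 4.8
```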
Autofill and Anchors

There are a couple of "tricks" in Excel that one can use to quickly flesh out a range of cells... The first is that Excel supports a relatively intelligent "autofill". Suppose you wanted to quickly create a column of cells from 1 to 10. Simply type the values for the first two cells, select them, and then drag the small square "handle" at the bottom right corner of that selection down through the tenth cell -- Excel fills in the rest of the sequence for you. Want the sequence $3,5,7,9,\ldots$ instead? Simply start with 3 and 5 as your first two cells, and drag down as before -- Excel will figure out what you want and fill the cells accordingly. This process works for any linear sequence.

Another trick that can really save you time when filling a range of cells is the use of an "anchor". To set the stage for understanding what an anchor is, consider the following... The RANDBETWEEN() function generates a random integer between the two arguments it is provided. Suppose we enter the formula "=RANDBETWEEN(1,10)" in cell C2 to generate a random integer between $1$ and $10$ in this cell, and then copy the content of this cell to all of the cells in range C3:H5 to create six columns of random numbers similar to what is shown below.

Now suppose we wish to sum the various columns. If we enter "=SUM(C2:C5)" in cell C7 to sum the numbers in column C, and copy that formula to cells in the range D7:H7, we get correct column totals for each column. This works because when one copies a formula to another cell, the cell references are updated in a relative way. As a simple example of this relative addressing -- if a formula in cell A1 references B1, the cell to its immediate right, and is copied to cell J5 for instance, then the former reference to B1 is changed by Excel to reference the cell K5, the cell to J5's immediate right.

Anchors allow us to change this default behavior when copying formulas to other cells, so that anything that has been "anchored" doesn't get updated due to the change in relative position. As an example, suppose one wanted to generate 10 simulations of a game that one had a 30% chance of winning. They also wanted their work to be flexible enough so that it could be easily adapted to a different chance of winning. We start by putting the text of "p = " in cell A1, followed by the value 0.30 in B1 to indicate the probability of winning the game. We also put the text "x" and "x < p?" in cells D1 and E1, respectively, as column headers for the part coming up...

Then, noting that the RAND() function generates a random value between $0$ and $1$, we enter "=RAND()" in cell D2, select it, and use the handle to autofill this formula to the 9 cells below it. To use these numbers to simulate a win or loss, in cell E2, we enter "=(D2<$B$1)". This will return a "value" of TRUE or FALSE depending on whether or not the value of cell D2 is less than that in B1. In this way, every TRUE produced represents a win and every FALSE represents a loss.

You may be wondering about the presence of the two dollar signs in the $B$1 just used. This is the "anchor" previously mentioned. When we copy the formula in cell E2 to lower cells, we want each copy to compare the value of the cell to its immediate left in column D (relative addressing) with the value of the single cell B1, regardless of which row of column E we are on. The presence of the dollar signs essentially says: "When this formula is copied to another cell, the row or column references after each dollar sign won't be changed." Using them in this way is known as absolute addressing. With all of the cells making comparisons with the single cell B1, if one wished to change the probability of winning the game to 50%, all one would need to do is change the value in that cell to 0.5.
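The same ten-game simulation can be expressed outside the spreadsheet as well. The sketch below takes only the probability p = 0.30 and the ten trials from the description above; the variable names and the fixed random seed are incidental choices.

```python
import random

p = 0.30          # probability of winning a single game (cell B1)
n_trials = 10     # ten simulated games, like the ten rows below the headers

random.seed(0)                                   # optional: make the run repeatable
x = [random.random() for _ in range(n_trials)]   # the "=RAND()" column
wins = [xi < p for xi in x]                      # the "=(D2<$B$1)" column

for xi, w in zip(x, wins):
    print(f"{xi:.3f}  {'WIN' if w else 'loss'}")
print("total wins:", sum(wins))
```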
The reason we need two dollar signs instead of a single one in our formula is that Excel lets one anchor the rows and columns independently. The presence of the dollar sign in front of the row value anchors only the row, while the presence of the dollar sign in front of the column letter(s) anchors only the column. This independence between row and column anchors can be particularly helpful when creating a table of values that depend on two inputs. For example, consider creating a simple multiplication table. We fill cells C2 and D2 with the values 1 and 2 and autofill to the right to create the column headers of our table. In a similar manner we create row headers in the range B3:B12. Then, to create the body of the table, we simply type "=$B3*C$2" in cell C3 and copy it to the rest of the cells in the range C3:L12. (Think carefully about why this works!)

As one more trick -- suppose you wish to copy a cell to a large range. You can of course copy the cell and then manually select the large range, but this might require a lot of scrolling if the range involves thousands of rows or columns. Instead, one can select a large group of cells for copying, pasting, or other needs, by going to the "Edit" menu, and selecting "Find : Go To" from there. At that point, enter the range that you wish to select in the blank marked "Reference" and click "OK". The large range is now selected! Note: Ctrl-G or F5 can be used as a keyboard shortcut to bring up the "Go To" dialog box, to speed up the selection process even further.

Recursively Defined Sequences

Usefully, one can use formulas to fill a range with a recursively defined sequence. For example, the Fibonacci sequence $1, 1, 2, 3, 5, 8, 13, \ldots$ is the sequence that begins with $1, 1$, with later terms each being the sum of the two terms that precede it. Notice how easy it is to create this sequence in Excel, as shown below. Here, we first enter 1 in cells B2 and B3, and then enter the formula "=B2+B3" in cell B4. Then -- letting relative addressing do its work -- either copy or autofill this formula to the cells below it, to obtain the sequence shown.
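For comparison, here is a small Python sketch of the two fills just described: the multiplication table built from "=$B3*C$2" and the Fibonacci column built from "=B2+B3". The 10×10 size of the table matches the headers described above; everything else is incidental.

```python
# Multiplication table: anchoring column B and row 2 means every copied cell
# multiplies its own row header by its own column header.
headers = list(range(1, 11))
table = [[r * c for c in headers] for r in headers]
print(table[2][4])   # row header 3 times column header 5 -> 15

# Fibonacci fill: B2 and B3 hold 1, and "=B2+B3" is copied downward.
fib = [1, 1]
for _ in range(10):
    fib.append(fib[-2] + fib[-1])
print(fib)           # [1, 1, 2, 3, 5, 8, 13, 21, ...]
```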
CommonCrawl
Stereographic projection

3D illustration of a stereographic projection from the north pole onto a plane below the sphere

In geometry, the stereographic projection is a particular mapping (function) that projects a sphere onto a plane. The projection is defined on the entire sphere, except at one point: the projection point. Where it is defined, the mapping is smooth and bijective. It is conformal, meaning that it preserves angles. It is neither isometric nor area-preserving: that is, it preserves neither distances nor the areas of figures. Intuitively, then, the stereographic projection is a way of picturing the sphere as the plane, with some inevitable compromises. Because the sphere and the plane appear in many areas of mathematics and its applications, so does the stereographic projection; it finds use in diverse fields including complex analysis, cartography, geology, and photography. In practice, the projection is carried out by computer or by hand using a special kind of graph paper called a stereographic net, shortened to stereonet or Wulff net.

History

Illustration by Rubens for "Opticorum libri sex philosophis juxta ac mathematicis utiles", by François d'Aiguillon. It demonstrates how the projection is computed.

The stereographic projection was known to Hipparchus, Ptolemy and probably earlier to the Egyptians. It was originally known as the planisphere projection.[1] Planisphaerium by Ptolemy is the oldest surviving document that describes it. One of its most important uses was the representation of celestial charts.[1] The term planisphere is still used to refer to such charts. It is believed that the earliest existing world map, created by Gualterious Lud of St Dié, Lorraine, in 1507, is based upon the stereographic projection, mapping each hemisphere as a circular disk.[2] The equatorial aspect of the stereographic projection, commonly used for maps of the Eastern and Western Hemispheres in the 17th and 18th centuries (and 16th century - Jean Roze 1542; Rumold Mercator 1595),[3] was utilised by ancient astronomers like Ptolemy.[4] François d'Aiguillon gave the stereographic projection its current name in his 1613 work Opticorum libri sex philosophis juxta ac mathematicis utiles (Six Books of Optics, useful for philosophers and mathematicians alike).[5] In 1695, Edmond Halley, motivated by his interest in star charts, published the first mathematical proof that this map is conformal.[6] He used the recently established tools of calculus, invented by his friend Isaac Newton.

Definition

Stereographic projection of the unit sphere from the north pole onto the plane z = 0, shown here in cross section

This section focuses on the projection of the unit sphere from the north pole onto the plane through the equator. Other formulations are treated in later sections. The unit sphere in three-dimensional space $R^{3}$ is the set of points (x, y, z) such that $x^{2}+y^{2}+z^{2}=1$. Let N = (0, 0, 1) be the "north pole", and let M be the rest of the sphere. The plane z = 0 runs through the center of the sphere; the "equator" is the intersection of the sphere with this plane.
For any point P on M, there is a unique line through N and P, and this line intersects the plane z = 0 in exactly one point P'. Define the stereographic projection of P to be this point P' in the plane.

In Cartesian coordinates (x, y, z) on the sphere and (X, Y) on the plane, the projection and its inverse are given by the formulas

$$(X,Y)=\left({\frac {x}{1-z}},{\frac {y}{1-z}}\right),$$

$$(x,y,z)=\left({\frac {2X}{1+X^{2}+Y^{2}}},{\frac {2Y}{1+X^{2}+Y^{2}}},{\frac {-1+X^{2}+Y^{2}}{1+X^{2}+Y^{2}}}\right).$$

In spherical coordinates (φ, θ) on the sphere (with φ the zenith angle, 0 ≤ φ ≤ π, and θ the azimuth, 0 ≤ θ ≤ 2π) and polar coordinates (R, Θ) on the plane, the projection and its inverse are

$$(R,\Theta )=\left({\frac {\sin \varphi }{1-\cos \varphi }},\theta \right)=\left(\cot {\frac {\varphi }{2}},\theta \right),$$

$$(\varphi ,\theta )=\left(2\arctan \left({\frac {1}{R}}\right),\Theta \right).$$

Here, φ is understood to have value π when R = 0. Also, there are many ways to rewrite these formulas using trigonometric identities. In cylindrical coordinates (r, θ, z) on the sphere and polar coordinates (R, Θ) on the plane, the projection and its inverse are

$$(R,\Theta )=\left({\frac {r}{1-z}},\theta \right),$$

$$(r,\theta ,z)=\left({\frac {2R}{1+R^{2}}},\Theta ,{\frac {R^{2}-1}{R^{2}+1}}\right).$$

Properties

The stereographic projection defined in the preceding section sends the "south pole" (0, 0, −1) of the unit sphere to (0, 0), the equator to the unit circle, the southern hemisphere to the region inside the circle, and the northern hemisphere to the region outside the circle. The projection is not defined at the projection point N = (0, 0, 1). Small neighborhoods of this point are sent to subsets of the plane far away from (0, 0). The closer P is to (0, 0, 1), the more distant its image is from (0, 0) in the plane. For this reason it is common to speak of (0, 0, 1) as mapping to "infinity" in the plane, and of the sphere as completing the plane by adding a "point at infinity". This notion finds utility in projective geometry and complex analysis. On a merely topological level, it illustrates how the sphere is homeomorphic to the one point compactification of the plane.

In Cartesian coordinates a point P(x, y, z) on the sphere and its image P′(X, Y) on the plane either both are rational points or none of them:

$$P\in {\mathbb {Q}}^{3}\iff P'\in {\mathbb {Q}}^{2}$$

A Cartesian grid on the plane appears distorted on the sphere. The grid lines are still perpendicular, but the areas of the grid squares shrink as they approach the north pole.

A polar grid on the plane appears distorted on the sphere. The grid curves are still perpendicular, but the areas of the grid sectors shrink as they approach the north pole.

Stereographic projection is conformal, meaning that it preserves the angles at which curves cross each other (see figures). On the other hand, stereographic projection does not preserve area; in general, the area of a region of the sphere does not equal the area of its projection onto the plane. The area element is given in (X, Y) coordinates by

$$dA={\frac {4}{(1+X^{2}+Y^{2})^{2}}}\;dX\;dY.$$

Along the unit circle, where $X^{2}+Y^{2}=1$, there is no infinitesimal distortion of area.
Near (0, 0) areas are distorted by a factor of 4, and near infinity areas are distorted by arbitrarily small factors. The metric is given in (X, Y) coordinates by

$${\frac {4}{(1+X^{2}+Y^{2})^{2}}}\;(dX^{2}+dY^{2}),$$

and is the unique formula found in Bernhard Riemann's Habilitationsschrift on the foundations of geometry, delivered at Göttingen in 1854, and entitled Über die Hypothesen welche der Geometrie zu Grunde liegen. No map from the sphere to the plane can be both conformal and area-preserving. If it were, then it would be a local isometry and would preserve Gaussian curvature. The sphere and the plane have different Gaussian curvatures, so this is impossible.

The conformality of the stereographic projection implies a number of convenient geometric properties. Circles on the sphere that do not pass through the point of projection are projected to circles on the plane. Circles on the sphere that do pass through the point of projection are projected to straight lines on the plane. These lines are sometimes thought of as circles through the point at infinity, or circles of infinite radius. All lines in the plane, when transformed to circles on the sphere by the inverse of stereographic projection, intersect each other at infinity. Parallel lines, which do not intersect in the plane, are tangent at infinity. Thus all lines in the plane intersect somewhere in the sphere—either transversally at two points, or tangently at infinity. (Similar remarks hold about the real projective plane, but the intersection relationships are different there.)

The sphere, with various loxodromes shown in distinct colors

The loxodromes of the sphere map to curves on the plane of the form

$$R=e^{\Theta /a},$$

where the parameter a measures the "tightness" of the loxodrome. Thus loxodromes correspond to logarithmic spirals. These spirals intersect radial lines in the plane at equal angles, just as the loxodromes intersect meridians on the sphere at equal angles.

The stereographic projection relates to the plane inversion in a simple way. Let P and Q be two points on the sphere with projections P' and Q' on the plane. Then P' and Q' are inversive images of each other in the image of the equatorial circle if and only if P and Q are reflections of each other in the equatorial plane. In other words, if: P is a point on the sphere, but not a 'north pole' N and not its antipode, the 'south pole' S, P' is the image of P in a stereographic projection with the projection point N and P" is the image of P in a stereographic projection with the projection point S, then P' and P" are inversive images of each other in the unit circle.

$$\triangle NOP^{\prime }\sim \triangle P^{\prime \prime }OS\implies OP^{\prime }:ON=OS:OP^{\prime \prime }\implies OP^{\prime }\cdot OP^{\prime \prime }=r^{2}$$

Wulff net

Wulff net or stereonet, used for making plots of the stereographic projection by hand

Stereographic projection plots can be carried out by a computer using the explicit formulas given above. However, for graphing by hand these formulas are unwieldy; instead, it is common to use graph paper designed specifically for the task. To make this graph paper, one places a grid of parallels and meridians on the hemisphere, and then stereographically projects these curves to the disk.
The result is called a stereonet or Wulff net (named for the Russian mineralogist George (Yuri Viktorovich) Wulff [7]). In the figure, the area-distorting property of the stereographic projection can be seen by comparing a grid sector near the center of the net with one at the far right of the net. The two sectors have equal areas on the sphere. On the disk, the latter has nearly four times the area as the former; if one uses finer and finer grids on the sphere, then the ratio of the areas approaches exactly 4. On the Wulff net, the images of the parallels and meridians intersect at right angles. This orthogonality property is a consequence of the angle-preserving property of the stereographic projection. (However, the angle-preserving property is stronger than this property; not all projections that preserve the orthogonality of parallels and meridians are angle-preserving.)

Illustration of steps 1–4 for plotting a point on a Wulff net

For an example of the use of the Wulff net, imagine that we have two copies of it on thin paper, one atop the other, aligned and tacked at their mutual center. Suppose that we want to plot the point (0.321, 0.557, −0.766) on the lower unit hemisphere. This point lies on a line oriented 60° counterclockwise from the positive x-axis (or 30° clockwise from the positive y-axis) and 50° below the horizontal plane z = 0. Once these angles are known, there are four steps:

1. Using the grid lines, which are spaced 10° apart in the figures here, mark the point on the edge of the net that is 60° counterclockwise from the point (1, 0) (or 30° clockwise from the point (0, 1)).
2. Rotate the top net until this point is aligned with (1, 0) on the bottom net.
3. Using the grid lines on the bottom net, mark the point that is 50° toward the center from that point.
4. Rotate the top net oppositely to how it was oriented before, to bring it back into alignment with the bottom net. The point marked in step 3 is then the projection that we wanted.

To plot other points, whose angles are not such round numbers as 60° and 50°, one must visually interpolate between the nearest grid lines. It is helpful to have a net with finer spacing than 10°; spacings of 2° are common. To find the central angle between two points on the sphere based on their stereographic plot, overlay the plot on a Wulff net and rotate the plot about the center until the two points lie on or near a meridian. Then measure the angle between them by counting grid lines along that meridian.

Two points P1 and P2 are drawn on a transparent sheet tacked at the origin of a Wulff net. The transparent sheet is rotated and the central angle is read along the common meridian to both points P1 and P2.

Other formulations and generalizations

Stereographic projection of the unit sphere from the north pole onto the plane z = −1, shown here in cross section

Some authors[8] define stereographic projection from the north pole (0, 0, 1) onto the plane z = −1, which is tangent to the unit sphere at the south pole (0, 0, −1). The values X and Y produced by this projection are exactly twice those produced by the equatorial projection described in the preceding section. For example, this projection sends the equator to the circle of radius 2 centered at the origin. While the equatorial projection produces no infinitesimal area distortion along the equator, this pole-tangent projection instead produces no infinitesimal area distortion at the south pole.
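As a numerical companion to the plotting example above, the sketch below applies the Cartesian formulas from the Definition section to the point (0.321, 0.557, −0.766), checks that the inverse formula undoes the projection, and confirms that the point plots 50° inward along the 60° direction (radius ≈ 0.36 of the unit disk on an equal-angle net). The function names are ours, not part of the article.

```python
import numpy as np

def project(p):                      # (x, y, z) on the unit sphere -> (X, Y)
    x, y, z = p
    return np.array([x / (1.0 - z), y / (1.0 - z)])

def unproject(q):                    # (X, Y) -> (x, y, z) on the unit sphere
    X, Y = q
    d = 1.0 + X**2 + Y**2
    return np.array([2.0 * X / d, 2.0 * Y / d, (-1.0 + X**2 + Y**2) / d])

# The example point of the plotting steps: 60 deg counterclockwise from +x,
# 50 deg below the horizontal plane, on the lower unit hemisphere.
trend, plunge = np.radians(60.0), np.radians(50.0)
p = np.array([np.cos(plunge) * np.cos(trend),
              np.cos(plunge) * np.sin(trend),
              -np.sin(plunge)])                  # approx (0.321, 0.557, -0.766)

X, Y = project(p)
assert np.allclose(unproject(project(p)), p)     # the inverse undoes the projection
R, az = np.hypot(X, Y), np.degrees(np.arctan2(Y, X))
print(f"azimuth {az:.0f} deg, radius {R:.3f}")   # 60 deg, ~0.364 of the unit disk
```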
Other authors[9] use a sphere of radius 1/2 and the plane z = −1/2. In this case the formulae become

$$(x,y,z)\rightarrow (\xi ,\eta )=\left({\frac {x}{{\frac {1}{2}}-z}},{\frac {y}{{\frac {1}{2}}-z}}\right),$$

$$(\xi ,\eta )\rightarrow (x,y,z)=\left({\frac {\xi }{1+\xi ^{2}+\eta ^{2}}},{\frac {\eta }{1+\xi ^{2}+\eta ^{2}}},{\frac {-1+\xi ^{2}+\eta ^{2}}{2+2\xi ^{2}+2\eta ^{2}}}\right).$$

Stereographic projection of a sphere from a point Q onto the plane E, shown here in cross section

In general, one can define a stereographic projection from any point Q on the sphere onto any plane E such that E is perpendicular to the diameter through Q, and E does not contain Q. As long as E meets these conditions, then for any point P other than Q the line through P and Q meets E in exactly one point P′, which is defined to be the stereographic projection of P onto E.[10]

All of the formulations of stereographic projection described thus far have the same essential properties. They are smooth bijections (diffeomorphisms) defined everywhere except at the projection point. They are conformal and not area-preserving.

More generally, stereographic projection may be applied to the n-sphere $S^{n}$ in (n + 1)-dimensional Euclidean space $E^{n+1}$. If Q is a point of $S^{n}$ and E a hyperplane in $E^{n+1}$, then the stereographic projection of a point P ∈ $S^{n}$ − {Q} is the point P′ of intersection of the line $\overline {QP}$ with E.

Still more generally, suppose that S is a (nonsingular) quadric hypersurface in the projective space $P^{n+1}$. By definition, S is the locus of zeros of a non-singular quadratic form $f(x_{0},\ldots ,x_{n+1})$ in the homogeneous coordinates $x_{i}$. Fix any point Q on S and a hyperplane E in $P^{n+1}$ not containing Q. Then the stereographic projection of a point P in S − {Q} is the unique point of intersection of $\overline {QP}$ with E. As before, the stereographic projection is conformal and invertible outside of a "small" set. The stereographic projection presents the quadric hypersurface as a rational hypersurface.[11] This construction plays a role in algebraic geometry and conformal geometry.

Applications within mathematics

Although any stereographic projection misses one point on the sphere (the projection point), the entire sphere can be mapped using two projections from distinct projection points. In other words, the sphere can be covered by two stereographic parametrizations (the inverses of the projections) from the plane. The parametrizations can be chosen to induce the same orientation on the sphere. Together, they describe the sphere as an oriented surface (or two-dimensional manifold). This construction has special significance in complex analysis. The point (X, Y) in the real plane can be identified with the complex number ζ = X + iY.
The stereographic projection from the north pole onto the equatorial plane is then

$$\zeta ={\frac {x+iy}{1-z}},$$

$$(x,y,z)=\left({\frac {2\mathrm {Re} (\zeta )}{1+{\bar {\zeta }}\zeta }},{\frac {2\mathrm {Im} (\zeta )}{1+{\bar {\zeta }}\zeta }},{\frac {-1+{\bar {\zeta }}\zeta }{1+{\bar {\zeta }}\zeta }}\right).$$

Similarly, letting ξ = X − iY be another complex coordinate, the functions

$$\xi ={\frac {x-iy}{1+z}},$$

$$(x,y,z)=\left({\frac {2\mathrm {Re} (\xi )}{1+{\bar {\xi }}\xi }},{\frac {2\mathrm {Im} (\xi )}{1+{\bar {\xi }}\xi }},{\frac {1-{\bar {\xi }}\xi }{1+{\bar {\xi }}\xi }}\right)$$

define a stereographic projection from the south pole onto the equatorial plane. The transition maps between the ζ- and ξ-coordinates are then ζ = 1 / ξ and ξ = 1 / ζ, with ζ approaching 0 as ξ goes to infinity, and vice versa. This facilitates an elegant and useful notion of infinity for the complex numbers and indeed an entire theory of meromorphic functions mapping to the Riemann sphere. The standard metric on the unit sphere agrees with the Fubini–Study metric on the Riemann sphere.

Visualization of lines and planes

Animation of Kikuchi lines of four of the eight <111> zones in an fcc crystal. Planes edge-on (banded lines) intersect at fixed angles.

The set of all lines through the origin in three-dimensional space forms a space called the real projective plane. This space is difficult to visualize, because it cannot be embedded in three-dimensional space. However, one can "almost" visualize it as a disk, as follows. Any line through the origin intersects the southern hemisphere z ≤ 0 in a point, which can then be stereographically projected to a point on a disk. Horizontal lines intersect the southern hemisphere in two antipodal points along the equator, either of which can be projected to the disk; it is understood that antipodal points on the boundary of the disk represent a single line. (See quotient topology.) So any set of lines through the origin can be pictured, almost perfectly, as a set of points in a disk.

Also, every plane through the origin intersects the unit sphere in a great circle, called the trace of the plane. This circle maps to a circle under stereographic projection. So the projection lets us visualize planes as circular arcs in the disk. Prior to the availability of computers, stereographic projections with great circles often involved drawing large-radius arcs that required use of a beam compass. Computers now make this task much easier.

Further associated with each plane is a unique line, called the plane's pole, that passes through the origin and is perpendicular to the plane. This line can be plotted as a point on the disk just as any line through the origin can. So the stereographic projection also lets us visualize planes as points in the disk. For plots involving many planes, plotting their poles produces a less-cluttered picture than plotting their traces. This construction is used to visualize directional data in crystallography and geology, as described below.

Other visualization

Stereographic projection is also applied to the visualization of polytopes. In a Schlegel diagram, an n-dimensional polytope in $R^{n+1}$ is projected onto an n-dimensional sphere, which is then stereographically projected onto $R^{n}$.
The reduction from $R^{n+1}$ to $R^{n}$ can make the polytope easier to visualize and understand.

Arithmetic geometry

The rational points on a circle correspond, under stereographic projection, to the rational points of the line.

In elementary arithmetic geometry, stereographic projection from the unit circle provides a means to describe all primitive Pythagorean triples. Specifically, stereographic projection from the north pole (0,1) onto the x-axis gives a one-to-one correspondence between the rational number points (x,y) on the unit circle (with y ≠ 1) and the rational points of the x-axis. If (m/n, 0) is a rational point on the x-axis, then its inverse stereographic projection is the point

$$\left({\frac {2mn}{n^{2}+m^{2}}},{\frac {n^{2}-m^{2}}{n^{2}+m^{2}}}\right)$$

which gives Euclid's formula for a Pythagorean triple.

Tangent half-angle substitution

The pair of trigonometric functions (sin x, cos x) can be thought of as parametrizing the unit circle. The stereographic projection gives an alternative parametrization of the unit circle:

$$\cos x={\frac {t^{2}-1}{t^{2}+1}},\quad \sin x={\frac {2t}{t^{2}+1}}.$$

Under this reparametrization, the length element dx of the unit circle goes over to

$$dx={\frac {2\,dt}{t^{2}+1}}.$$

This substitution can sometimes simplify integrals involving trigonometric functions.

Applications to other disciplines

Cartography

Stereographic projection is used to map the Earth, especially near the poles, but also near other points of interest.

Stereographic projection of the world north of 30°S. 15° graticule.

Rumold Mercator's map

Joan Blaeu's map

The fundamental problem of cartography is that no map from the sphere to the plane can accurately represent both angles (and thus shapes) and areas. In general, area-preserving map projections are preferred for statistical applications, while angle-preserving (conformal) map projections are preferred for navigation. Stereographic projection falls into the second category. When the projection is centered at the Earth's north or south pole, it has additional desirable properties: It sends meridians to rays emanating from the origin and parallels to circles centered at the origin. The stereographic is the only projection that maps all small circles to circles. This property is valuable in planetary mapping when craters are typical features.

Crystallography

A crystallographic pole figure for the diamond lattice in [111] direction

In crystallography, the orientations of crystal axes and faces in three-dimensional space are a central geometric concern, for example in the interpretation of X-ray and electron diffraction patterns. These orientations can be visualized as in the section Visualization of lines and planes above. That is, crystal axes and poles to crystal planes are intersected with the northern hemisphere and then plotted using stereographic projection. A plot of poles is called a pole figure. In electron diffraction, Kikuchi line pairs appear as bands decorating the intersection between lattice plane traces and the Ewald sphere thus providing experimental access to a crystal's stereographic projection. Model Kikuchi maps in reciprocal space,[12] and fringe visibility maps for use with bend contours in direct space,[13] thus act as road maps for exploring orientation space with crystals in the transmission electron microscope.
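The correspondence with Pythagorean triples is easy to play with in code. The sketch below applies the inverse-projection formula above to a few rational points m/n and reads off the triple from the numerators and the common denominator; the coprime/opposite-parity filter for primitivity is a standard number-theoretic condition, not something stated in the text.

```python
from math import gcd

def triple_from_rational(m, n):
    # Inverse stereographic projection of (m/n, 0) from the unit circle's north pole.
    return 2 * m * n, n * n - m * m, n * n + m * m   # legs and hypotenuse

for m, n in [(1, 2), (1, 3), (2, 3), (1, 4), (3, 4)]:
    if gcd(m, n) == 1 and (m - n) % 2:               # coprime, opposite parity
        a, b, c = triple_from_rational(m, n)
        print(a, b, c, a * a + b * b == c * c)       # e.g. 4 3 5 True, 12 5 13 True
```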
Geology

Use of lower hemisphere stereographic projection to plot planar and linear data in structural geology, using the example of a fault plane with a slickenside lineation

Researchers in structural geology are concerned with the orientations of planes and lines for a number of reasons. The foliation of a rock is a planar feature that often contains a linear feature called lineation. Similarly, a fault plane is a planar feature that may contain linear features such as slickensides. These orientations of lines and planes at various scales can be plotted using the methods of the Visualization of lines and planes section above. As in crystallography, planes are typically plotted by their poles. Unlike crystallography, the southern hemisphere is used instead of the northern one (because the geological features in question lie below the Earth's surface). In this context the stereographic projection is often referred to as the equal-angle lower-hemisphere projection. The equal-area lower-hemisphere projection defined by the Lambert azimuthal equal-area projection is also used, especially when the plot is to be subjected to subsequent statistical analysis such as density contouring.

Photography

Spherical panorama of Paris projected using the stereographic projection

Some fisheye lenses use a stereographic projection to capture a wide angle view.[14] Compared to more traditional fisheye lenses which use an equal-area projection, areas close to the edge retain their shape, and straight lines are less curved. However, stereographic fisheye lenses are typically more expensive to manufacture.[15] Image remapping software, such as Panotools, allows the automatic remapping of photos from an equal-area fisheye to a stereographic projection. The stereographic projection has been used to map spherical panoramas. This results in effects known as a little planet (when the center of projection is the nadir) and a tube (when the center of projection is the zenith).[16] The popularity of using stereographic projections to map panoramas over other azimuthal projections is attributed to the shape preservation that results from the conformality of the projection.[16]

See also

List of map projections
Poincaré disk model, the analogous mapping of the hyperbolic plane

Notes

1. Snyder (1993).
2. According to (Snyder 1993), although he acknowledges he did not personally see it.
3. Snyder (1989).
4. Brown, Lloyd Arnold: The story of maps, p. 59.
5. According to (Elkins, 1988) who references Eckert, "Die Kartenwissenschaft", Berlin 1921, pp 121–123.
6. Timothy Feeman. 2002. "Portraits of the Earth: A Mathematician Looks at Maps". American Mathematical Society.
7. Wulff, George, Untersuchungen im Gebiete der optischen Eigenschaften isomorpher Kristalle: Zeits. Krist., 36, l–28 (1902).
8. Cf. Apostol (1974) p. 17.
9. (reference not preserved in this copy)
10. Cf. Pedoe (1988).
11. Cf. Shafarevich (1995).
12. M. von Heimendahl, W. Bell and G. Thomas (1964) Applications of Kikuchi line analyses in electron microscopy, J. Appl. Phys. 35:12, 3614–3616.
13. P. Fraundorf, Wentao Qin, P. Moeck and Eric Mandell (2005) Making sense of nanocrystal lattice fringes, J. Appl. Phys. 98:114308.
14. Samyang 8 mm f/3.5 Fisheye CS.
15. (reference not preserved in this copy)
16. German et al. (2007).
CommonCrawl
TMRS: an algorithm for computing the time to the most recent substitution event from a multiple alignment column

Hisanori Kiryu ORCID: orcid.org/0000-0003-3554-53531, Yuto Ichikawa2 & Yasuhiro Kojima1

As the number of sequenced genomes grows, researchers have access to an increasingly rich source for discovering detailed evolutionary information. However, the computational technologies for inferring biologically important evolutionary events are not sufficiently developed. We present algorithms to estimate the evolutionary time (\(t_{\text {MRS}}\)) to the most recent substitution event from a multiple alignment column by using a probabilistic model of sequence evolution. As the confidence in estimated \(t_{\text {MRS}}\) values varies depending on gap fractions and nucleotide patterns of alignment columns, we also compute the standard deviation \(\sigma\) of \(t_{\text {MRS}}\) by using a dynamic programming algorithm. We identified a number of human genomic sites at which the last substitutions occurred between two speciation events in the human lineage with confidence. A large fraction of such sites have substitutions that occurred between the concestor nodes of Hominoidea and Euarchontoglires. We investigated the correlation between tissue-specific transcribed enhancers and the distribution of the sites with specific substitution time intervals, and found that brain-specific transcribed enhancers are threefold enriched in the density of substitutions in the human lineage relative to expectations. We have presented algorithms to estimate the evolutionary time (\(t_{\text {MRS}}\)) to the most recent substitution event from a multiple alignment column by using a probabilistic model of sequence evolution. Our algorithms will be useful for Evo-Devo studies, as they facilitate screening potential genomic sites that have played an important role in the acquisition of unique biological features by target species.

As sequenced genomes continue to accumulate, they provide an increasingly rich source for discovering detailed evolutionary information. The UCSC genome browser provides multiple genome alignments for 100 vertebrate species, including humans (the multiz100way track) [1,2,3]. In previous decades, multiple DNA alignments were often used to reconstruct species trees and ancestral nucleotide states [4], and many algorithms and software packages have been developed for this purpose. Some of the most widely used approaches include the neighbor-joining algorithm [5], the maximum likelihood method [4], and Bayesian Markov chain Monte Carlo methods [6]. These algorithms usually assume evolutionary models in which each nucleotide mutates stochastically over evolutionary time, and they output the most consistent phylogenetic tree among the \((2n-3)!!\) possible rooted or \((2n-5)!!\) possible unrooted trees for n species. On the other hand, since the species tree of the 100 vertebrates in multiz100way has essentially been resolved by previous studies [7], finding functional genomic sites, rather than determining the phylogenetic tree, has become a more important application of multiple genome alignments in recent years. As it is difficult to visually inspect functional regions from 100-species alignments, computing genome-wide summary statistics is very important. Measuring the strength of negative or positive selection is among the most popular analyses for screening functional regions of genomes [8,9,10,11,12,13,14].
These statistics are computed using probabilistic models that model the stochastic processes of DNA mutations along phylogenetic species trees, which are used in tree reconstruction [4,5,6], and detect genomic regions that show smaller or larger mutation rates using likelihood ratio tests or similar probabilistic computations. Such statistics have advantages over simpler statistics that do not assume a particular evolutionary model, such as nucleotide frequency of alignment columns and pairwise mismatch rates. By using a phylogenetic tree, we can appropriately count the number of ancestral mutations that are widespread within extant species. Further, stochastic processes can account for multiple nucleotide mutations whose effects are not negligible when we study evolutionarily distant species. However, only conservation/divergence measures are not sufficient to extract all evolutionarily important events from potential \(4^{100}\) nucleotide patterns of a 100-species alignment column. In this study, we develop algorithms to compute three statistics, \(t_{\text {MRS}}\), \(\sigma\), and q, for each column of a multiple genome alignment based on an evolutionary model that is similar to those described above. \(t_{\text {MRS}}\) is the evolutionary time to the most recent substitution event that occurred along the lineage of a given target species in the phylogenetic tree. Since the confidence in estimated \(t_{\text {MRS}}\) values varies markedly among alignment columns depending on gap fractions and complexity of nucleotide patterns (see Fig. 1 for explanation), we also compute the standard deviation \(\sigma\) of \(t_{\text {MRS}}\). Further, we compute the probability q that there is no mutation in the target lineage because the estimated \(t_{\text {MRS}}\) value has no meaning in such cases. By filtering out sites with non negligible probability of nucleotide conservation over the entire target lineage based on q, we can remove highly conserved sites. By comparing \(t_{\text {MRS}}\) with speciation time points, we can categorize sites by the groups of species that share mutation effects with the target species. Such detailed information is difficult to obtain from conservation measures. Our algorithms can be a very useful tool for screening the genomic sites that may have been involved in the acquisition of unique biological features by target species. In the next section, we describe our algorithms to compute \(t_{\text {MRS}}\) and data processing procedures. We first explain the \(t_{\text {MRS}}\) algorithm on a single edge of phylogenetic tree, and then generalize it to account for the entire tree. The algorithms for computing \(\sigma\) and q are described in Additional file 1 as they are very similar to that of \(t_{\text {MRS}}\). In the result section, we empirically show the correctness of our algorithms by posterior sampling of mutation history. We also show that our algorithm is fast enough to be applied to the entire human genome, and that \(t_{\text {MRS}}\) statistic is very different from other statistics to detect evolutionary conservation/divergence of genomic sites. We then apply our algorithms to the multiz100way dataset and investigate distributions of \(t_{\text {MRS}}\) in different genomic contexts. 
In particular, we investigate the correlation between \(t_{\text {MRS}}\) distribution on the bidirectionally transcribed enhancers and tissue specificity of enhancer activities and found that brain-specific transcribed enhancers are threefold enriched in the density of \(t_{\text {MRS}}\) that located in the human lineage. We first derive formulas for \(t_{\text {MRS}}\) and other variables for an edge of a phylogenetic tree, and then describe how to generalize them into statistics for the entire phylogenetic tree. Single edge case A continuous-time Markov model for nucleotide sequence evolution can be defined by a differential equation that determines the time evolution of the probability of observing each nucleotide: $$\frac{\partial}{{\partial t}}p (a\left| {b,t} \right.) = \sum\limits_{i \in {\text{Nuc}}} {{R_{ai}}} p(i\left| {b,t} \right.),p(a\left| {b,0} \right.) = {\delta _{ab}},$$ where p(a|b, t) represents the probability of observing base a at time t conditioned on base b being observed at time zero; \(\text {Nuc}=\{A,C,G,T\}=\{1,2,3,4\}\) represents the set of nucleotides; \(\delta _{ij}\) represents the Kronecker delta, which is 1 if \(i=j\) and is 0 otherwise; and \(R=\{R_{ij}\}\) represents the substitution rate matrix. The solution is given by a matrix exponential, which can be numerically computed by using the eigenvalue decomposition of rate matrix \(R=U\Lambda U^{-1}\) (\(\Lambda =\text {diag}(\lambda _1,\ldots ,\lambda _4)\)) as follows [15], $$\begin{aligned} p(a\left| {b,t} \right.) &=\left[ \exp (tR)\right] _{ab}, \exp (A)=1+\frac{A}{1!}+\frac{A^2}{2!}+\cdots \\&=U e^{t\Lambda } U^{-1}, e^{t\Lambda }=\text {diag}(e^{t\lambda _1},\ldots ,e^{t\lambda _4}) \end{aligned}$$ Similar to the scalar exponential function, a matrix exponential has an infinite product representation, $$\begin{aligned} \left[ \exp (tR)\right] _{ab}&=\lim _{N\rightarrow \infty }\left[ Q^N\right] _{ab}\\&=\lim _{N\rightarrow \infty }\sum _{X\in \Omega _N(a,b)}Q_{X_NX_{N-1}}\cdots Q_{X_1X_0}, \end{aligned}$$ where \(Q=(I+tR/N)\). The matrix Q satisfies the condition of a transition matrix of a discrete Markov process for sufficiently large N, and our formula for \(t_{\text {MRS}}\) can be derived via this connection to the discrete model. In the second equation, \(\Omega _N(a,b)\) is the set of all paths X along discrete time points \(0,\ldots ,N\) such that \(X=\{X_k\in \text {Nuc}|k=0,\ldots ,N,X_N=a,X_0=b\}\). Then, the summand of the second equation can be interpreted as the probability of substitution history \({\mathbb {P}}(X_N,X_{N-1},\dots ,X_1|X_0)\). In the discrete model, the random variable \(T_{\text {MRS}}\) that represents the time to the most recent substitution is given by $$\begin{aligned} T_{\text {MRS}}&=\sum _{l=1}^{N-1}\frac{lt}{N}{\mathbb{I}} \left(X_{N}=\cdots =X_{N-l}\right)\mathbb{I}\left(X_{N-l}\ne X_{N-l-1}\right)\\&\quad + \frac{Nt}{N} \mathbb{I} \left(X_{N}=\cdots =X_{0}\right) \\ &= \sum _{l=1}^{N}\frac{lt}{N}{\mathbb{I}}(X_{N}=\cdots =X_{N-l})\\&\quad - \sum _{l=1}^{N-1}\frac{lt}{N}{\mathbb{I}}\left(X_{N}=\cdots =X_{N-l-1}\right) \\&= \frac{t}{N}\sum _{l=1}^{N}{\mathbb{I}}\left(X_{N}=\cdots =X_{N-l}\right), \end{aligned}$$ where \({\mathbb{I}}(\cdot )\) is the indicator function. Note that in the first equation, we define \(T_{\text {MRS}}=t\) if path X has no substitution at all. In the second line, we used \({\mathbb{I}}(a\ne b)=1-{\mathbb{I}}(a=b)\), and the two terms in the second line mostly cancel out to give the third line. 
Then, the expected value \(t_{\text {MRS}}\) of \(T_{\text {MRS}}\) is given by $$\begin{aligned} t_{\text {MRS}}(a,b,t)&={\mathbb {E}}\left( T_{\text {MRS}}|a,b,t\right)\\& =\sum _{l=1}^{N}\frac{t}{N}{\mathbb {P}}(X_N=\cdots =X_{N-l}|a, b, t) \\&=\left. \sum _{l=1}^{N}\frac{t}{N}\left[ Q_D^{l}Q^{N-l}\right] _{ab}\big /\left[ Q^N\right] _{ab}\right. , \end{aligned}$$ where \(Q_D\) is the diagonal part of Q. In order to take the continuum limit (\(N\rightarrow \infty\)), we use formulas such as $$\begin{aligned} \sum _{l=1}^{N}\frac{1}{N}f\left( \frac{l}{N}\right)&\rightarrow \int _{0}^{1}f(s)ds \\ Q_D^{l}, Q^{N-l}&\rightarrow \exp (stR_D),\exp ((1-s)tR), (s=l/N), \end{aligned}$$ where \(R_D\) is the diagonal part of rate matrix R. By using these formulas, \(t_{\text {MRS}}\) can be computed using the following formulas $$\begin{aligned} t_{\text {MRS}}(a,b,t)&=\frac{t}{{\mathcal {Z}}}\left[ \int _{0}^{1}e^{stR_D}e^{(1-s)tR}ds\right] _{ab}\nonumber \\&=\frac{t}{{\mathcal {Z}}}\sum _{i=1}^{4} U_{ai}{U^{-1}}_{ib}{\mathcal {K}}(tR_{Daa},t\lambda _i)\nonumber \\ {\mathcal {Z}}&= \left[ e^{tR}\right] _{ab} = \sum _{i=1}^{4} U_{ai}e^{t\lambda _i}{U^{-1}}_{ib}\nonumber \\ {\mathcal {K}}(x, y)&=\int _{0}^{1}e^{sx}e^{(1-s)y}ds= {\left\{ \begin{array}{ll} \frac{e^x-e^y}{x - y} & \quad \text {if } x\ne y \\ e^x & \quad \text {if } x=y \end{array}\right. }, \end{aligned}$$ where \(R=U\Lambda U^{-1}\) and \(\Lambda =\text {diag}(\lambda _1,\ldots ,\lambda _4)\) is an eigenvalue decomposition of rate matrix R. The formulas for the standard deviation \(\sigma\) of \(T_{\text {MRS}}\) and probability q of no substitution can be derived in similar manners and given by $$\begin{aligned} \sigma (a,b,t)&=\sqrt{{\mathbb{E}}\left( T_{\text{MRS}}^2|a,b,t\right) - t_{\text{MRS}}^2(a,b,t)} \\ {\mathbb {E}}\left( T_{\text{MRS}}^2|a,b,t\right)&=\frac{2t}{{\mathcal{Z}}}\sum _{i=1}^{4} U_{ai}{U^{-1}}_{ib}\mathcal {K'}(tR_{Daa},t\lambda _i), {\mathcal{K'}}(x, y)=\frac{\partial {\mathcal{K}}(x, y)}{\partial x}\\ q(a,b,t)&=\frac{1}{{\mathcal{Z}}}e^{tR_{Daa}}\delta_{ab}. \end{aligned}$$ The derivation of each above formula is described in Additional file 1. Strand symmetric rate matrix Let \(a_c\) be the complementary nucleotide of nucleotide a. A rate matrix R is strand symmetric if it satisfies \(R_{a_cb_c}=R_{ab}\) for all \(a,b\in \text {Nuc}\) [16]. Strand non-symmetric rate matrices such as the general time reversible (GTR) model generally produce different posterior expectation values if we take the complement of an alignment column. Since there is no specific strand direction in intergenic regions and the existence of two different expectation values for a single genomic site complicates the downstream analyses, we use the most general, 6-parameter strand symmetric rate matrix. Table 1 shows the parametrization of rate matrices of strand symmetric model and GTR model. The parameters are optimized together with the edge lengths of the phylogenetic tree using the maximum likelihood method. We optimize the parameters using a LBFGSB gradient descent package [17], where we compute the gradient of likelihood function exactly using a inside-outside algorithms as described in Refs. [4, 18, 19]. Table 1 Rate parameters Time to most recent substitution \(t_\text{MRS }\). These schematically show the situations that may impact the confidence levels of inferred \(t_{\text {MRS}}\) values. The leaf nodes correspond to the target species are indicated by rectangles. 
In the left figure, we expect the last substitution occurred between node x and y, and \(t_{\text {MRS}}\) will be around \(t_1\) to \(t_1+t_2\). In the middle figure, the pattern of alignment column is not simple, and the state of node y can be either A or G. Therefore, the inferred \(t_{\text {MRS}}\) will have a large variance between \(t_1\) to \(t_1+t_2+t_3\). In the right figure, there is an ambiguous nucleotide in the column. In such cases, the inferred \(t_{\text {MRS}}\) value is the same as that inferred from only three species, and the confidence will accordingly be lower than when all four nucleotides are known Phylogenetic tree case To extend our algorithm to phylogenetic trees, we specify a target species that corresponds to a leaf node of a tree and consider the path from the leaf node to the root node. Each internal node along the path corresponds to the last common ancestor (concestor) [20] of the target species and some extant species. Let \({\mathcal {C}}={c_0,\ldots ,c_M}\) be the set of concestors with \(c_M\) being the root node and \(c_0\) being the leaf node of the target for convenience. Further, let \(s_i\) be the fraction of path length between the leaf and \(c_k\), let \(s_{kl}=(s_l-s_k)\), and let \({\bar{t}}\) be the total path length from the target leaf to the root. Then, the corresponding formula of Eq. 1 is obtained by dividing the integration range into sub-intervals between neighboring concestors and inserting the probabilities \(\{\gamma _k\}\) that emit partial alignment columns that are descendants of the sister branch of each concestor (see Fig. 2), Inside and outside variables. \(c_k\) denotes the concestor nodes on the target lineage. \(b_k\) denotes the sibling node of \(c_{k-1}\). \(\alpha (b_k,*)\) represents the inside variable, while \(\beta (c_k,*)\) represents the outside variable. \(\gamma (b_k,*)\) represents a dynamic programming variable in Eq. 2 in the main text $$\begin{aligned} t_{\text {MRS}}&=\frac{{\bar{t}}}{{\mathcal {Z}}(Y)}\sum _b\left[ \sum _{k=1}^{M}\int _{s_{k-1}}^{s_k}e^{s_{01}{\bar{t}}R_D}\gamma _1\right.\\&\quad\left.\cdots \left[ e^{(s-s_{k-1}){\bar{t}}R_D}e^{(s_{k}-s){\bar{t}}R}\right] \gamma _k\cdots e^{s_{M-1,M}{\bar{t}}R}ds\right] _{ab}\pi _b \nonumber \\ {\mathcal {Z}}(Y)&={\mathbb {P}}(Y) \nonumber \\ \gamma _k&=\text {diag}(\gamma (b_k, 1),\ldots ,\gamma (b_k, 4)) \nonumber \\ \gamma (b_k, i)&= \sum _j\alpha (b_k,j)p(j|i, t_{b_k}) \nonumber \\ \alpha (n,i)&={\mathbb {P}}(Y({\mathcal {L}}(n))|X_{n}=i), \end{aligned}$$ where \({\mathcal {Z}}(Y)\) represents the likelihood of alignment column Y, \(\pi\) represents the equilibrium distribution for rate matrix R, \(Y({\mathcal {L}}')\) represents the partial alignment column for a subset of leaf nodes \({\mathcal {L}}'\in {\mathcal {L}}\) ( \({\mathcal {L}}\): the set of all leaf nodes), \(a=Y(c_0)\), \(b_k\) represents the sibling node of \(c_{k-1}\) with parent node \(c_k\), \(t_n\) represents the edge length between node n and its parent node, \({\mathcal {L}}(n)\) the descendant leaves of node n, and \(X_{n}\) represents the random variable that represents the nucleotide type at node n. The inside variable \(\alpha (n,i)={\mathbb {P}}(Y({\mathcal {L}}(n))|X_{n}=i)\) represents the probability of emitting partial alignment column \(Y({\mathcal {L}}(n))\) given the state at node n is fixed to i. See Fig. 2 for the relations between tree nodes and dynamic programming variables. 
Because the range of integration is localized only in the k-th edge in the above equation, we can compute \(t_{\text {MRS}}\) using a dynamic programming algorithm (Algorithm 1). In Algorithm 1, \(p_D(j|i,t_{c_{k-1}})=\left[ \exp (tR_D)\right] _{ji}\) represents the probability of transition \(j\leftarrow i\) after time t without any substitution. \(\kappa (i,j)\) is defined by $$\begin{aligned} \kappa (i,j)={\bar{t}}\sum _l U_{il}{U^{-1}}_{lj}{\mathcal {K}}(s_{k-1,k}{\bar{t}}R_{Dii}, s_{k-1,k}{\bar{t}}\lambda _l). \end{aligned}$$ \(\beta (n,i)={\mathbb {P}}(Y({\mathcal {L}}\backslash {\mathcal {L}}(n)),X_{\text {Pa}(n)}=i)\) is called an outside variable and represents the probability of emitting alignment nucleotides other than the descendants \({\mathcal {L}}(n)\) of node n with a constraint that the state of the parent node \(\text {Pa}(n)\) is fixed to i. The inside and outside variables are computed by using the inside and outside algorithms [4, 19] resembling the use of forward-backward algorithms in linear hidden Markov models. \(\alpha _D(c_k,i)\) is the probability that emits the partial alignment column \(Y({\mathcal {L}}(n))\) with no substitution along the target lineage up to concestor node \(c_k\), given the state of node n is fixed to i. Similar algorithms can be derived for the standard deviation \(\sigma\) and the probability of no mutation q as described in Additional file 1. Alignment gaps and ambiguous characters We treat gap and ambiguous nucleotide characters of non-target leaves as missing characters; we sum the probabilities of all possible nucleotide patterns in computation. Then, the probability condition indicates that the estimated values are the same as those computed from the reduced phylogenetic tree and alignment columns after removal of gaps and ambiguous characters and the corresponding edges in the tree. This increases the standard deviation \(\sigma\) of estimates \(t_{\text {MRS}}\). On the other hand, we do not consider the sites if the character of the target is a gap or an ambiguous character. Software availability We implemented our algorithms in the C++ language. The resulting software ('TMRS') is available at our website [21]. Dataset and data processing We downloaded the MAF-formatted Multiz100way multiple alignment files from the UCSC genome browser site, which consists of multiple genome alignments of 100 vertebrate species, including the human genome version hg38. We also downloaded the phylogenetic tree data from the PhyloP track, whose edge lengths are trained using fourfold degenerate (4d) sites of RefSeq genes under the general time reversible model. We used the topology of the PhyloP phylogenetic tree as it is, and trained only the edge lengths of the tree as well as the rate parameters of the strand symmetric model. For this, we collected alignment columns at human 4d sites based on gene annotations of the RefGene track from the UCSC site, following Siepel et al. [8] and Pollard et al. [9]. The reason for using 4d sites is the higher quality of alignments and higher coverage of distant species in the alignments [8, 9], though they may be subject to various evolutionary constraints. In order to investigate the uncertainty of trained parameters, we randomly sampled 100 sets of 4d sites from about three million 4d sites in the human genome such that each has a given number of sites, ranging from 1 to \(10^5\). 
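The inside variables \(\alpha (n,i)\) are the usual pruning-algorithm quantities. A minimal sketch of the inside recursion (hypothetical tree encoding, not the authors' implementation) is given below; \(\gamma (b_k,i)=\sum _j\alpha (b_k,j)p(j|i,t_{b_k})\) is then obtained by one more matrix–vector product of the same form:

```python
import numpy as np
from scipy.linalg import expm

def inside(node, R, column):
    """alpha[i] = P(observed leaves below `node` | X_node = i) by Felsenstein pruning.

    `node` is ('leaf', name) or ('internal', [(child, branch_length), ...]);
    `column` maps leaf names to a base in {0,1,2,3}; leaves that are missing
    (gaps or ambiguous characters) are simply absent and get summed out.
    """
    kind, payload = node
    if kind == 'leaf':
        alpha = np.ones(4)
        if payload in column:
            alpha = np.zeros(4)
            alpha[column[payload]] = 1.0
        return alpha
    alpha = np.ones(4)
    for child, t_child in payload:
        P = expm(t_child * R)                # P[j, i] = p(j | i, t_child)
        alpha *= P.T @ inside(child, R, column)
    return alpha

def column_likelihood(tree, R, column):
    """Z = sum_i pi[i] * alpha_root[i], with pi the stationary distribution of R
    (its null vector, normalized)."""
    lam, U = np.linalg.eig(R)
    pi = np.abs(U[:, np.argmin(np.abs(lam))].real)
    pi /= pi.sum()
    return float(pi @ inside(tree, R, column))

# toy usage: ((human, chimp), mouse) with arbitrary branch lengths
rng = np.random.default_rng(0)
R = rng.uniform(0.2, 1.0, (4, 4))
np.fill_diagonal(R, 0.0)
np.fill_diagonal(R, -R.sum(axis=0))
tree = ('internal', [
    (('internal', [(('leaf', 'human'), 0.05), (('leaf', 'chimp'), 0.06)]), 0.2),
    (('leaf', 'mouse'), 0.4),
])
print(column_likelihood(tree, R, {'human': 0, 'chimp': 0, 'mouse': 2}))
```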
We generated an alignment of concatenated genomic alignment columns, and trained parameters based on the maximum likelihood method [22], using the LBFGS-B gradient descent package [17]. For studying differences in \(t_{\text {MRS}}\) distributions among genes, we sampled 100,000 alignment columns from intergenic, CDS, 3′UTR, and 5′UTR sequences based on 'Gencode v24 Basic' track gene models from the UCSC site [3]. Anderson et al. [23] identified genomic elements called transcribed enhancers in human and other genomes, where short RNAs are produced by bidirectional transcription as a result of chromatin openings. From the FANTOM5 enhancer atlas site [24], we downloaded the coordinates of transcribed enhancers and the list of tissue and cell specific enhancers where bidirectional transcription occurs in a tissue and/or cell-specific manner. Results and discussions Parameter optimization and performance tests We trained rate matrix \(\{R_{ij}\}\) and tree edge lengths \(\{t_k\}\) from genomic multiple alignment columns sampled from 4d sites. We trained 100 sets of parameters with random initial points from 100 sets of random-sampled alignment columns. Figure 3 (top left) shows the distributions of pairwise relative differences of trained parameters \(\theta = \left(\{R_{ij}\},\{t_k\} \right)\) for each number of alignment columns. Here, the relative difference between two parameters \(\theta _1\) and \(\theta _2\) is defined by \(|\theta _1 - \theta _2| / \text{max} (|\theta _1|, |\theta _2|)\) with |v| being the Euclidean norm. It shows the trained parameter converges very well as increasing the number of alignment columns. Figure 3 (top right) shows the Pearson correlation coefficient with the tree edge lengths provided in the PhyloP track of the UCSC genome browser, which was computed using the general time reversible model [9]. It shows concordant tree edge lengths (correlation coefficient \(> 0.9\)) are learned despite the differences in rate matrix models. Figure 3 (bottom) shows the distributions of the tree path lengths from the leaf node of humans to its concestor nodes using parameters trained with 100,000 alignment columns. As the variance among training sets is very small, we use their mean values as the times to concestors before the present and do not consider the widths of distributions. Table 2 shows the mean rate matrix and equilibrium distribution. The average transition-transversion rate ratio is about 2.7 in this model (see Section 6 in Additional file 1 for the computation). In the following results, we use 100 sets of parameters that are trained from 100,000 alignment columns and take averages of \(t_{\text {MRS}}\), \(\sigma\), and q computed for each parameter set. Convergence of optimized parameters. The upper left panel shows the distributions of the pairwise relative differences of inferred parameters. The x-axis represents the number of alignment columns used to train the parameters. The upper right panel represents the distribution of the correlation coefficients of tree edge lengths between the PhyloP model of the UCSC genome browser site and the inferred parameters. The x-axis is the same as that shown in the upper left panel. The bottom panel represents the distributions of inferred time to each concestor from the present. The unit is the number of substitutions per site. Each parameter set is trained using 100,000 alignment columns sampled from 4d sites Table 2 Trained rate matrix and equilibrium distribution In Fig. 
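The training step can be reproduced in miniature. The sketch below is only illustrative: it fits a K80-style transition/transversion parameter and a single edge length by maximum likelihood with L-BFGS-B, using numerical rather than the exact inside-outside gradients of the paper, and is not the 6-parameter strand symmetric fit on the full tree.

```python
import numpy as np
from scipy.linalg import expm
from scipy.optimize import minimize

rng = np.random.default_rng(0)

def rate_matrix(kappa):
    """K80-style toy matrix (uniform base frequencies, order A, C, G, T):
    transitions A<->G and C<->T get rate kappa, transversions rate 1;
    columns sum to zero, matching the convention used above."""
    R = np.ones((4, 4))
    R[0, 2] = R[2, 0] = R[1, 3] = R[3, 1] = kappa
    np.fill_diagonal(R, 0.0)
    np.fill_diagonal(R, -R.sum(axis=0))
    return R

# simulate pairwise alignment columns separated by evolutionary time t_true
kappa_true, t_true, n_sites = 2.7, 0.4, 50_000
P = expm(t_true * rate_matrix(kappa_true))       # P[a, b] = p(a | b, t_true)
anc = rng.integers(0, 4, n_sites)
cum = P.cumsum(axis=0)
der = (rng.random(n_sites)[None, :] < cum[:, anc]).argmax(axis=0)
counts = np.zeros((4, 4))
np.add.at(counts, (der, anc), 1)

def neg_log_lik(theta):
    kappa, t = theta
    return -(counts * np.log(expm(t * rate_matrix(kappa)))).sum()

res = minimize(neg_log_lik, x0=[1.0, 0.1], method='L-BFGS-B',
               bounds=[(1e-3, 50.0), (1e-4, 10.0)])
print(res.x)   # should land close to (kappa_true, t_true) = (2.7, 0.4)
```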
4, we compared (\(t_{\text {MRS}}\), \(\sigma\), q) computed by our algorithms with the corresponding values obtained from posterior sampling of mutation histories along the phylogenetic tree in order to numerically check the correctness of our algorithms. It shows the relative errors between two values monotonically decrease as the sample size and the fineness of discretization increases. Numerical tests of our algorithms. The statistics \(t_{\text {MRS}}\), \(\sigma\), and q computed by exact algorithms were compared with those estimated using sampled histories of nucleotide substitutions. The y-axes represent the relative difference between the values from the exact algorithms and those obtained by approximate sampling algorithms. The x-axes show the dependency on the number of sampled histories and the number of discrete points in the phylogenetic tree from which the states were sampled Table 3 shows the runtimes of our C++ implementation. We used a single ES-2670 v3 2.3 GHz core as the computational platform. As the \(t_{\text {MRS}}\), \(\sigma\), and q values of each alignment columns are independently computable, our algorithms can deal with the entire human genome with reasonable time using a compute cluster. Table 3 Runtime of our implementation Comparison with other statistical measures To show the significance of our algorithms, we compared the accuracy with two possible methods of estimating \(t_{\text {MRS}}\) and q. The first method (termed 'reconstruction') uses the ancestral reconstruction. In this method, we first set the nucleotide state of each concestor node \(c_k\) to the base \(a_{c_k}\) with the maximal posterior probability: $$\begin{aligned} a_{c_k}&= \text {argmax}_{i}{\mathbb {P}}\left( X_{c_k}=i|Y\right) =\frac{1}{{\mathcal {Z}}(Y)}\sum _j\alpha (c_k, i)p(i|j,t_{c_k})\beta (\text {Pa}(c_k),j) \end{aligned}$$ Then, we return the middle point of the edge between nodes \(c_{k-1}\) and \(c_k\) as \(t_{\text {MRS}}\) where \(c_k\) is the most recent concestor whose reconstructed nucleotides differ from that of the target species \(a_{c_k}\ne Y(c_0)\). We set \(q=1\) if there is no such \(c_k\) and we set \(q=0\) otherwise. The second method (termed 'alignment') to infer \(t_{\text {MRS}}\) only considers nucleotides of extant species: we return the middle point of the edge between nodes \(c_{k-1}\) and \(c_k\) as \(t_{\text {MRS}}\) where \(c_k\) is the most recent concestor such that partial alignment column \(Y({\mathcal {L}}(c_k)\), which are descendants of \(c_k\), contain different nucleotide from the target nucleotides \(\exists a\in Y({\mathcal {L}}(c_k)), a\ne Y(c_0)\). Similarly to the 'reconstruction' method, We set \(q=1\) if there is no such \(c_k\) and we set \(q=0\) otherwise. To compare the accuracy of our algorithm with these approximate algorithms, we simulated evolutionary history and alignment column of base mutation using forward simulation using the phylogenetic model of the previous section. We masked nucleotide positions where there are gap or ambiguous characters in sampled multiz100way alignments in order to imitate the gap patterns of real alignments. Details of the simulation algorithm is described in Section 6 of Additional file 1. As a result, we obtained 100,000 alignment columns of 100 species with 'true' annotation of \(t_{\text {MRS}}\) and \(q\in \{0,1\}\). Figure 5a shows accuracies of predicting the absence of mutation along the target lineage. 
The x-axis is the fraction of positives in the dataset which was controlled by varying threshold of q. Since the 'reconstruction' and 'alignment' methods assign only binary q values, only a single point is plotted for each. The y-axis represents the ratio of false positives in all the positive predictions (i.e. False Discovery Rate, FDR). It shows that FDR monotonically decreases with decreasing q threshold, indicating the correctness of our algorithm for q. It also indicates that the accuracies of absence call of reconstruction and alignment methods are similar to that of our algorithm with positive fraction 0.5 and 1.0, respectively. Figure 5b shows the mean errors of predicted \(t_{\text {MRS}}\) relative to the total length of target lineage for each positive fraction. The error mostly decreases with stricter thresholds for our method, while reconstruction and alignment methods show more than 10% errors on average. Table 4 shows numerical values of FDR and mean error for several q threshold. Since the mean error of \(t_{\text {MRS}}\) is less than 5% of the total length of target lineage, we will use threshold \(q=0.01\) in the analyses in the following sections. Effect of filtering. We investigated the effect of filtering by q threshold on the accuracy of \(t_{\text {MRS}}\) estimates using simulation dataset. The x-axis represents the fraction of alignment columns remained by filtering with varying threshold. a Fraction of alignment columns that have no mutation throughout the target lineage in the positive set. b Mean % error of \(t_{\text {MRS}}\) values in the dataset after filtering. The blue and green points represent the approximate \(t_{\text {MRS}}\) and q computed from the reconstruction of ancestral states, and the closest extant species whose base is different from that of the target species, respectively Table 4 Effects of filtering by probability q of no mutation Table 5 shows the comparison of \(t_{\text {MRS}}\) and other statistical measures computed from genomic alignments. We used the same alignment columns in the previous paragraph but with filtering with threshold \(q < 1\) for true q values. For this dataset, we computed Spearman's correlation coefficients with true \(t_{\text {MRS}}\) and other indicators: \(t_{\text {MRS}}(q<0.01,0.1,1)\) represents our algorithms with a few filtering criteria of q. 'reconstruction' and 'alignment' are the approximate methods described above with filtering based on q values computed by their respective method. 'entropy' represents the information entropy of base frequency of alignment column. 'pairwise' represents the ratio of the number of identical bases in \(n(n-1)/2\) possible base pairs of n bases in the alignment column. 'phastcons' represents the posterior probability of conserved region computed by PhastCons [8]. 'phylop' represents the negative p-value of conservation computed by PhyloP [9]. 'gerp' represents the estimated number of 'rejected mutations' compute by Gerp++ [10]. The table shows small correlation of conservation measures (entropy, pairwise, phastcons, phylop, gerp) with \(t_{\text {MRS}}\) and very high correlation of estimated \(t_{\text {MRS}}\) with strict filtering criterion \(q<0.01\). It shows our algorithms can accurately extract distinct evolutionary information which is difficult to extract with previous conservation measures. 
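For reference, the 'alignment' baseline used in this comparison can be written out directly; a hypothetical sketch (the data layout for concestors and columns is an assumption on our part):

```python
def t_mrs_alignment(column, concestor_leaves, concestor_times, target_base):
    """'alignment' baseline: return the midpoint of the edge (c_{k-1}, c_k) for the
    most recent concestor c_k whose observed descendant leaves contain a base that
    differs from the target base, together with q = 0; return the total path length
    and q = 1 if no such concestor exists.

    column           -- dict leaf -> base, with gap/ambiguous leaves omitted
    concestor_leaves -- concestor_leaves[k] = set of leaves descending from c_k
    concestor_times  -- increasing path lengths from the target leaf to each c_k
                        (concestor_times[0] = 0 for the target leaf itself)
    """
    for k in range(1, len(concestor_times)):
        observed = {column[l] for l in concestor_leaves[k] if l in column}
        if any(b != target_base for b in observed):
            return 0.5 * (concestor_times[k - 1] + concestor_times[k]), 0
    return concestor_times[-1], 1
```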
Table 5 Correlation with other conservation measures Genomic distribution of \(t_{\text {MRS}}\) We computed the time to the most recent substitution \(t_{\text {MRS}}\), its standard deviation \(\sigma\), and the probability q that there is no substitution for alignment columns uniformly sampled from the human genome. The scatter plot of \(t_{\text {MRS}}\) and q values (Fig. 6 (top left)) shows the probability of no substitution q tends to increase with increasing \(t_{\text {MRS}}\). However, the distribution is broad depending on the nucleotide patterns of alignment columns, and a non-zero fraction of sites have deep ancestral substitutions (i.e., large \(t_{\text {MRS}}\) and small q) within the Homo–Vertebrate lineage. The scatter plot of \(\sigma\) and q (Fig. 6 (right)) shows that the probability of no substitution q is very small if \(\sigma < 0.1\). The high peaks (the red regions) of these two figures show that a large number of alignment columns have \(t_{\text {MRS}}\sim 0.7\), \(\sigma \sim 0.4\), and \(q\sim 0.3\). For these sites, it is difficult to determine if there are substitutions within the interval of the Homo–Vertebrate lineage. Table 6 Evolutionary time of reduced concestors Distributions of \(t_{\text {MRS}}, \sigma\) and q in the human genome. The top panels show the sampling distribution of statistics \(t_{\text {MRS}}\)-q (left) and \(\sigma\)-q (right) in the human genome. In these panels, a total of 2,063,207 alignment columns were sampled from the human genome excluding repeat regions. The bottom panels show the densities of q (left) and \(t_{\text {MRS}}\) (with \(q<0.01\)) (right) for several types of genomic region: CDS, 5\('\)UTR, 3\('\)UTR, Intron, and Intergenic Figure 6 (bottom left) shows the density of q for each annotated genomic region. Compared to Intergenic, Intron, 3′UTR, and 5′UTR, CDS regions have a large fraction of sites with a high probability of no mutation, indicating many ancestral nucleotides that were fixed before the appearance of the vertebrate concestor. Since computed \(t_{\text {MRS}}\) values have less meaning if q is large, we filtered out sites with \(q>0.01\) and plotted the distributions of \(t_{\text {MRS}}\) values for the remaining sites (Fig. 6 (bottom right)). There are several peaks because some sites are guaranteed to experience the last substitution between specific interval of concestors. All regions have the highest peak around \(t_{\text {MRS}}\sim 0.1\), which is between the Simiiformes and Primate concestors. CDS regions have a large peak around \(t_{\text {MRS}}\sim 0.36\), which corresponds to between the Eutheria and Theria concestors. Concestor interval of the last substitution event We are generally interested in the substitutions that are associated with the evolution of unique features in the species that inherited them. In this respect, we want to know in which interval between two speciation events (i.e., between two concestor nodes) each \(t_{\text {MRS}}\) is located. In order to simplify the presentation, we reduced the concestor nodes from the full 19 concestors of the PhyloP tree to eight as shown in Table 6 and Fig. 7 in the following analyses of concestor intervals. Since the estimated \(t_{\text {MRS}}\) values can have a large standard deviation \(\sigma\), we consider intervals between all pairs of concestor nodes: Homo–Hominoidea, Homo–Mammalia, Mammalia–Vertebrata, etc. 
Then, we assign a concestor interval I to a site if \(q<0.01\) and if I is the smallest interval that contains a confidence interval \([t_{\text {MRS}}- 2\sigma , t_{\text {MRS}}+ 2\sigma ]\). Only about 4% of sites were assigned to any concestor interval by this method. Figure 8 (top) shows the frequency distribution of genomic sites that are assigned to some concestor interval, which shows that many sites are assigned to concestor intervals Hominoidea–Euarchontoglires, Hominoidea–Eutheria, or Homo–Euarchontoglires. Figure 8 (bottom) shows the same frequency distributions for each category of annotated genomic regions. The distributions, except that of CDS, are similar to each other. On the other hand, CDS regions have many deep ancestral intervals. Topological relationship of reduced concestors. We show the topology of simplified phylogenetic tree of 100 vertebrate species used in the analyses of concestor intervals. See Table 6 for the numerical values of evolutionary time Frequency of concestor intervals. The two panels show the frequencies of genomic sites categorized by the concestor intervals where their most recent substitutions occurred. The left panel shows the genomic distribution. The axes represent late (x-axis) and early (y-axis) ends of intervals. The right panel shows distributions for several types of genomic region: CDS, 5′UTR, 3′UTR, Intron, Intergenic, and Transcribed Enhancer. Only intervals with non-zero counts are shown in this panel Tissue-concestor interval correlations for transcribed enhancers Andersson et al. [23] identified genomic elements called transcribed enhancers in the human genome and other genomes where short RNAs are produced by bidirectional transcription as a result of chromatin opening. They showed transcribed enhancers often overlap with protein-binding marks such as ChIP-seq peaks or protein-binding motifs. They are also enriched in disease-associated single nucleotide polymorphisms (SNPs). Many transcribed enhancers are tissue-specific in that bidirectional transcription of short RNAs occurs frequently in specific tissues. They showed the expressions of a number of genes are well explained by those of a few transcribed enhancers upstream of the genes. Thus, we can see tissue specific enhancer activities for these transcribed enhancers. In the FANTOM5 enhancer atlas site [24], tissue-specific enhancers are annotated by using the UBERON tissue anatomy ontology and Cell Ontology [25, 26]. For example, 41 diverse tissues were assigned to 10-1335 differentially-expressed enhancers (see Additional file 1: Table S2) [24]. Using these data, we studied the tissue and concestor interval of the last substitution event as an example of screening evolutionarily important events that affected life designs of extant organisms. We computed \((t_{\text {MRS}}, \sigma , q)\) for each site of the transcribed enhancer regions, filtered out the sites with \(q>0.01\), and associated concestor intervals as described above. For each concestor interval, we list the enhancers that contain sites associated with the interval. We used the hypergeometric test to determine if the sites corresponding to a specific concestor interval are significantly enriched for the enhancers transcribed in a specific tissue type. Table 7 shows tissues that have the top five most significant p-values for some concestor interval (more details are discussed in Section 7 of Additional file 1). 
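Both the interval-assignment rule and the hypergeometric enrichment screen just described can be sketched compactly (hypothetical data structures; in particular, taking the enhancer universe to be all enhancers that received any assigned site is our assumption):

```python
from scipy.stats import hypergeom

def assign_concestor_interval(t_mrs, sigma, q, concestor_times, q_max=0.01):
    """Return (i, j) for the smallest concestor interval [c_i, c_j] containing the
    confidence interval [t_mrs - 2*sigma, t_mrs + 2*sigma], or None if q >= q_max
    or no interval contains it. `concestor_times` are increasing path lengths from
    the target leaf, with concestor_times[0] = 0 for the leaf (Homo) itself."""
    if q >= q_max:
        return None
    lo, hi = t_mrs - 2 * sigma, t_mrs + 2 * sigma
    best = None
    for i in range(len(concestor_times) - 1):
        for j in range(i + 1, len(concestor_times)):
            if concestor_times[i] <= lo and hi <= concestor_times[j]:
                width = concestor_times[j] - concestor_times[i]
                if best is None or width < best[0]:
                    best = (width, (i, j))
    return None if best is None else best[1]

def tissue_interval_enrichment(interval_enhancers, tissue_enhancers, universe):
    """Hypergeometric enrichment of a tissue's transcribed enhancers among the
    enhancers containing at least one site assigned to a given concestor interval.
    All arguments are sets of enhancer identifiers."""
    M = len(universe)                                  # all enhancers considered
    n = len(tissue_enhancers & universe)               # tissue-specific enhancers
    N = len(interval_enhancers & universe)             # enhancers hit by the interval
    k = len(interval_enhancers & tissue_enhancers & universe)
    p_value = hypergeom.sf(k - 1, M, n, N)             # P(X >= k)
    fold = (k / N) / (n / M) if N > 0 and n > 0 else float('nan')
    return p_value, fold
```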
We find that the brain and Homo–Vertebrata interval association has the most significant p-value and Homo–Vertebrata sites are enriched threefold in brain-associated enhancers relative to expectations. The second tissue was meninx, which is also associated with the nervous system (Table 7). Figure 9 shows a few sampled alignment columns in a brain-specific enhancer, which are assigned to the Homo–Vertebrata interval. Alignment columns that have three or more nucleotides suggest there are some substitutions along the Homo–Vertebrata lineage, but the patterns of nucleotide types and the number of gaps makes it difficult to determine at what time point the substitution occurs. Thus, the assigned intervals are the most ambiguous for these alignment columns. Table 7 Tissue-concestor interval correlation for transcribed enhancers Examples of alignment columns. The figure shows the examples of alignment columns that include the concestor interval Homo–Vertebrata in the transcribed enhancer regions and show brain-specific RNA transcription. The y-axis represents nine example alignment columns and x-axis represents nucleotides of each column, in which gaps, ambiguous nucleotides, and unaligned regions are shown as blank. The species are aligned such that it conforms phylogenetic trees and sorted such that species more evolutionarily distant from humans are placed on the right Tissue-concestor interval correlations for genes We studied the correlation between the tissue-specificity and concestor intervals for genes in a similar manner as for transcribed enhancers. See Section 9 in Additional file 1 for detailed description of the method. Table 8 shows the top three tissues that have genes with sites corresponding to specific concestor intervals are shown. Within each tissue, the top three concestor intervals are shown. As compared to the corresponding Table 7 for transcribed enhancers, deeply ancestral intervals appear in the table, indicating the high level of conservation of exonic sequences. On the other hand, fold enrichment of concestor intervals are smaller than in transcribed enhancers which make it more difficult to infer the impact of the most recent mutations on the life design of extant species than in the case of transcribed enhancers. Table 8 Tissue-concestor interval correlation for genes We have developed algorithms to infer the time \(t_{\text {MRS}}\) to most recent substitution in the lineage from a given target species to the root of a phylogenetic tree. In order to filter out highly conserved sites and ambiguous sites where the confidence of estimated \(t_{\text {MRS}}\) is low, we also compute the probability q of no mutation and the standard deviation \(\sigma\) of \(t_{\text {MRS}}\). We computed these variables efficiently using dynamic programming algorithms on the phylogenetic tree such that the algorithms can be applied to multiple genomic alignments with 100 species. We have empirically checked the correctness of our algorithms by posterior sampling of mutation histories on the tree. Our algorithms are exact under the assumptions of the model: genome evolution follows a site-independent continuous-time Markov process along the phylogenetic tree. Our results also depend on the quality of Multiz alignment, which was debated previously [27]. 
Although alignment errors can be less influential if the corresponding leaf nodes are far from the target lineage, the incomplete coverage of sequenced genomes directly affects the number of sites whose \(t_{\text {MRS}}\) can be determined with confidence. We expect that the number of sites with confident \(t_{\text {MRS}}\) values will increase as the coverage of genome sequences improves in the future. We have applied our tool to 100-species multiple genome alignments with human as the target species and obtained a frequency spectrum of concestor intervals that categorized the time points at which the last substitutions occurred. Furthermore, we studied the correlation between the frequency of concestor intervals and the tissue-specificity of transcribed enhancers and found that brain-specific transcribed enhancers are highly enriched among the sites with mutations in the human lineage. It may be very interesting to combine our method with genome editing experiments to see if nucleotide changes at the screened sites affect tissue functions.

References

1. Blanchette M, Kent WJ, Riemer C, Elnitski L, Smit AF, Roskin KM, Baertsch R, Rosenbloom K, Clawson H, Green ED, Haussler D, Miller W. Aligning multiple genomic sequences with the threaded blockset aligner. Genome Res. 2004;14(4):708–15.
2. Kent WJ, Sugnet CW, Furey TS, Roskin KM, Pringle TH, Zahler AM, Haussler D. The human genome browser at UCSC. Genome Res. 2002;12(6):996–1006.
3. UCSC Genome Browser. http://genome.ucsc.edu/. Accessed 15 Jun 2018.
4. Felsenstein J. Evolutionary trees from DNA sequences: a maximum likelihood approach. J Mol Evol. 1981;17(6):368–76.
5. Saitou N, Nei M. The neighbor-joining method: a new method for reconstructing phylogenetic trees. Mol Biol Evol. 1987;4(4):406–25.
6. Huelsenbeck JP, Ronquist F. MRBAYES: Bayesian inference of phylogenetic trees. Bioinformatics. 2001;17(8):754–5.
7. Murphy WJ, Eizirik E, O'Brien SJ, Madsen O, Scally M, Douady CJ, Teeling E, Ryder OA, Stanhope MJ, de Jong WW, Springer MS. Resolution of the early placental mammal radiation using Bayesian phylogenetics. Science. 2001;294(5550):2348–51.
8. Siepel A, Bejerano G, Pedersen JS, Hinrichs AS, Hou M, Rosenbloom K, Clawson H, Spieth J, Hillier LW, Richards S, Weinstock GM, Wilson RK, Gibbs RA, Kent WJ, Miller W, Haussler D. Evolutionarily conserved elements in vertebrate, insect, worm, and yeast genomes. Genome Res. 2005;15(8):1034–50.
9. Pollard KS, Hubisz MJ, Rosenbloom KR, Siepel A. Detection of nonneutral substitution rates on mammalian phylogenies. Genome Res. 2010;20(1):110–21.
10. Cooper GM, Stone EA, Asimenos G, Green ED, Batzoglou S, Sidow A. Distribution and intensity of constraint in mammalian genomic sequence. Genome Res. 2005;15(7):901–13.
11. Garber M, Guttman M, Clamp M, Zody MC, Friedman N, Xie X. Identifying novel constrained elements by exploiting biased substitution patterns. Bioinformatics. 2009;25(12):54–62.
12. Gu X, Fu YX, Li WH. Maximum likelihood estimation of the heterogeneity of substitution rate among nucleotide sites. Mol Biol Evol. 1995;12(4):546–57.
13. Yang Z. Maximum likelihood phylogenetic estimation from DNA sequences with variable rates over sites: approximate methods. J Mol Evol. 1994;39(3):306–14.
14. Siepel A, Pollard KS, Haussler D. New methods for detecting lineage-specific selection. In: Apostolico A, Guerra C, Istrail S, Pevzner PA, Waterman M, editors. Research in computational molecular biology, RECOMB. Lecture notes in computer science. Berlin: Springer; 2006.
15. Yang Z. Computational molecular evolution. Oxford: Oxford University Press; 2006.
16. Karro JE, Peifer M, Hardison RC, Kollmann M, von Grunberg HH. Exponential decay of GC content detected by strand-symmetric substitution rates influences the evolution of isochore structure. Mol Biol Evol. 2008;25(2):362–74.
17. Zhu C, Byrd RH, Nocedal J. Algorithm 778: L-BFGS-B, FORTRAN routines for large scale bound constrained optimization. ACM Trans Math Softw. 1997;23(4):550–60.
18. Siepel A, Haussler D. Phylogenetic estimation of context-dependent substitution rates by maximum likelihood. Mol Biol Evol. 2004;21(3):468–88.
19. Kiryu H. Sufficient statistics and expectation maximization algorithms in phylogenetic tree models. Bioinformatics. 2011;27(17):2346–53.
20. Dawkins R. The Ancestor's Tale. London: Weidenfeld and Nicolson; 2004.
21. TMRS Software. https://github.com/hmatsu1226/SCODE. Accessed 15 Jun 2018.
22. Fisher R. On the mathematical foundations of theoretical statistics. Philos Trans R Soc Lond Ser A. 1922;222:309–68.
23. Andersson R, et al. An atlas of active enhancers across human cell types and tissues. Nature. 2014;507(7493):455–61.
24. FANTOM5 human enhancer tracks. http://slidebase.binf.ku.dk/human_enhancers/. Accessed 15 Jun 2018.
25. Mungall CJ, Torniai C, Gkoutos GV, Lewis SE, Haendel MA. Uberon, an integrative multi-species anatomy ontology. Genome Biol. 2012;13(1):5.
26. Bard J, Rhee SY, Ashburner M. An ontology for cell types. Genome Biol. 2005;6(2):21.
27. Frith MC, Park Y, Sheetlin SL, Spouge JL. The whole alignment and nothing but the alignment: the problem of spurious alignment flanks. Nucleic Acids Res. 2008;36(18):5863–71.

Funding: This work was supported by JSPS KAKENHI [Grant Numbers 16H01532, 17K00398] (H.K.).

Author information: Department of Computational Biology and Medical Sciences, GSFS, University of Tokyo, 5-1-5 Kashiwanoha, Kashiwa, Chiba, Japan (Hisanori Kiryu, Yasuhiro Kojima); Works Applications Co., Ltd., 1-12-32, Akasaka, Minato-ku, Tokyo, Japan (Yuto Ichikawa).

Author contributions: HK designed the project, developed the algorithms, and wrote the manuscript. KI and YK contributed to the development of the algorithms, their implementation, and the computational experiments in the early stages of the study. All authors read and approved the final manuscript.

Correspondence to Hisanori Kiryu.

Additional file 1: Detailed description of TMRS algorithms.

Kiryu, H., Ichikawa, Y. & Kojima, Y. TMRS: an algorithm for computing the time to the most recent substitution event from a multiple alignment column. Algorithms Mol Biol 14, 23 (2019). https://doi.org/10.1186/s13015-019-0158-3
Hyperparameter Space
Sense and sensitivity (and specificity and utility)

A recent tweet from Ash Jogalekar got me thinking.

List of compounds medicinal chemists wouldn't have bothered to pursue because they didn't fit "intuition" about "druglike" rules: Metformin ($400M revenue), Cyclosporin (>$1B), Dimethyl fumarate (>$4B). In drug discovery, there will always be enough exceptions to the rules. — Ash Jogalekar (@curiouswavefn) June 24, 2019

Translating it to more 'machine-learning-ish' language, this means that the problem of predicting ultimately successful drug candidates is the pathological case where the cost of your predictions is really high and the rewards are higher but concentrated in a space defined by unmeasured covariates (a molecule might be a terrible drug for one indication but a really good drug for another, and the space of possible/probable indications is vast and for the most part unknown). To make matters worse, it is also many times an out-of-distribution problem. Take for example natural products – which can be broadly defined as any chemical compound found in nature. They comprise more than half of the FDA-approved drugs, and yet there are many examples where evolution has made them so chemically weird that there's no way of telling if they'll make a good drug. Keith Robison, in response to Ash's tweet, illustrates this case.

3⁄4 natural products. And how many NP would a chemist say "that looks good!"? — Keith Robison (@OmicsOmicsBlog) June 25, 2019

So yeah, that's why drug discovery is hard in a nutshell. While there are many ways of defining what is or is not druglike, and there have been many methods (from heuristics to logic-based reasoning to machine learning) to detect drug-like compounds, it ultimately boils down to making a decision under the knowledge that the cost/reward structure is highly skewed in ways we can't always predict. How can we go about and reason in this context?

A nibble of decision theory

Whenever you look for ways to frame an informed decision given data and a cost/reward structure, you inevitably run into normative decision theories – frameworks that deal with the optimal decisions an agent can take to maximize its outcomes. There are two key abstract ingredients in these decision theories: the preferences that the agent has and the prospects of getting them. In many such theories, these preferences typically translate to some measure of utility and the prospects turn into probabilities. The most natural and intuitive way of combining utilities and probabilities is through the expected utility function, which is a sum over events of the utility of each event weighed by its probability. While explored and informally defined by many, including Bernoulli, since at least the 1700s, it was not really until 1947 that Von Neumann and Morgenstern formalized this notion and discovered how tightly an agent's preferences are entwined with the probabilities of the outcomes.

Objective probability

In the Von Neumann and Morgenstern world, the agent has to pick between different worlds, called lotteries, where each event is assigned a probability of occurring. Further, an agent's preferences obey four main rules:

1. You can either prefer one lottery over another, vice-versa, or be indifferent; that is, the relationship is complete.
2. Your preferences between lotteries are transitive.
3. Given three lotteries with a set preference order, $L \leq M \leq N$, you can always combine $L$ and $N$ via weighted averages of their respective probabilities in such a way that the combination is at least equivalent to or greater than $M$. That is, there is no lottery that is so bad or so good that it is always better or worse than combinations of other lotteries.
4. Given two lotteries where you prefer one over the other, this preference is maintained regardless of adding the same perturbation in probabilities to both.

It turns out that in this scenario, given an order of preference over the lotteries, we can find a utility function that maps events to real utility values such that, when applied to each of the lotteries in expectation (weighed by their probabilities), the expected utilities follow the same order. This is also true in reverse: a utility function then defines a preferential order.

Subjective probability

In the Von Neumann–Morgenstern world, the risk of the lotteries, encoded by the probabilities of the events, is absolute. The state of the world according to the agent and its beliefs is never modeled, and ironically the agent really loses its agency since the lotteries encode its fate. A more agent-centric view of the world was put forth by Savage in 1954, in which the agent can act on the world via some state-action function according to its beliefs of what will happen next, and in which its actions will be tied to some utility function; a framework that is very similar to Markov decision processes and other reinforcement learning beasts. Interestingly, the same kind of link between expected utility and an agent's beliefs holds here as well. That is, given a utility function, we can find a probability distribution that encodes the agent's beliefs such that its actions maximize the utility function's expected utility – and vice versa. This analogous conclusion requires reasoning about and comparing preferences between the state-action functions as well, and a mirror of Von Neumann–Morgenstern's axioms on lotteries (that preferences over state-action functions are complete, transitive, and that they can be perturbed both in ways that change or maintain their preference ordering) is put in place to fulfill this.

Sensitivity/specificity/prevalence trade-off decisions

Let's return to our drug discovery case. Here, you have a heuristic/intuition/experimental assay/computational method that yields a go/no-go decision on whether to pursue the drug further. It all really boils down to three main factors: how good our decision-making instrument is, how tractable the real problem is, and what our utility function is with regard to possible outcomes. Such a scenario is general and comes up a lot in other areas such as diagnostics. Following decision theory, we will want to combine all three into an expected utility we can examine. Let's attach some quantities to each of these three factors. To quantify how good our decision instrument is (e.g. the state of our test), we can use quantities such as sensitivity and specificity, which could be quantified retroactively via the rate of true positives found by the test versus all true positives and the rate of true negatives found by the test versus all true negatives, respectively. To quantify how hard the problem is (e.g. the state of the world) we can use the prevalence of the true positives, that is, on average how many actual good drugs we can expect there to be in the general universe of molecules.
Finally, the utility function will ultimately depend on how much the decision will cost (pursuing the drug further) and how large we can expect the benefits of finding a good drug to be. We can also add a third term, a penalization of sorts that happens when we chase a false positive, which we will call the follow-up cost, and which can be a real burden not only in drug discovery but in diagnostics and health policy as well (what if the detected cancer wasn't really there and we follow up with more invasive procedures?). These costs and rewards will ultimately be shaped by some functional form into a final utility function. This functional form can be as straightforward as linear but also as drastic as a mirrored double exponential modeling a high-risk/high-reward scenario, or dampened by a logarithm to signify diminishing returns, or tempered with an isoelastic function to model balanced risk aversion.

A simple model

Putting everything we enumerated above together, we can build a simple model of the decision-making process. Let $se$ be the sensitivity of the test, $sp$ the specificity, $p$ the prevalence of the true positives in the general population, $c$ the cost of a go decision, $r$ the reward, $fup$ the follow-up cost as defined above, and $f$ our utility functional form (e.g. linear, logarithmic, etc.). In all of this we will assume that the cost of the test itself is constant and therefore not included in the final utility values. We will also assume that the probability of finding a true positive is exactly the sensitivity $se$ and the probability of finding a true negative is exactly the specificity $sp$. In the general case, these probabilities correlate with the sensitivity and specificity but are not expected to be the same thing (especially in borderline out-of-distribution scenarios). There are four possible scenarios when performing a test:

1. With probability $p$ we get an actual true positive. We then perform the test and with probability $se$ we find the true positive. Our reward is $f(r - c)$, which is weighed by the probability of this scenario, $p \times se$.
2. With probability $p$ we get an actual true positive. We then perform the test and with probability $1 - se$ we miss the true positive. Our utility is $f(c)$, which is weighed by the probability of this scenario, $p \times (1- se)$.
3. With probability $1 - p$ we get an actual true negative. We then perform the test and with probability $sp$ we correctly decide not to pursue. Our utility is $f(0)$ in this case since we do not take further action. This is weighed by the probability of this scenario, $(1 - p) \times sp$.
4. With probability $1 - p$ we get an actual true negative. We then perform the test and with probability $1 - sp$ we flag this, falsely, as a positive. We pay the follow-up cost and our utility is $f(-c-fup)$, which is weighed by the probability of this scenario, $(1 - p) \times (1 - sp)$.

Summing all these quantities, weighed by the probabilities of each scenario playing out, will give us the expected utility. We could write out the expected utility and ponder on boundary cases, singularities, etc. But it's much more fun if we can visualize it interactively. I've written a small visualization tool for this simple model, which you can use to explore how utility changes with each choice of prevalence, cost, reward, utility functional form, etc. Here, I'm plotting utility as a heatmap over sensitivity and specificity, so you can see how e.g.
importance of sensitivity increases when prevalence decreases and specificity increases when prevalence increases. I clipped the min/max colormap values of the heatmap to always be on the [-1,1] range so you can see the changes more easily. Addendum: Causal Decision Theory One could go further than reasoning over an agent's beliefs and instead reason over the causal models the agent has of the world, a causal decision theory. Here, the literature gets murkier as there are many ways to go about and insert causal models into the framework – treating the agent's acts as a do-operator or treating the agent's model of the world as a causal model to name a couple. In any case, there seems to be no firm answer of what produces the best results and there are unfortunately too few tests of these theories, at least to my knowledge.
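The simple model above fits in a few lines. The sketch below (not the post's actual visualization code) reproduces the expected-utility computation exactly as the four scenarios are written, including the $f(c)$ term for a missed true positive, so it can regenerate heatmaps like the ones described:

```python
import numpy as np

def expected_utility(se, sp, p, c, r, fup, f=lambda x: x):
    """Expected utility of the go/no-go test model described above.

    se, sp -- sensitivity and specificity of the decision instrument
    p      -- prevalence of true positives
    c, r   -- cost of a 'go' decision and reward of a true find
    fup    -- follow-up cost paid when chasing a false positive
    f      -- utility functional form (linear by default; swap in an isoelastic
              or exponential form to model risk preferences)
    """
    return (p * se * f(r - c)                    # true positive found
            + p * (1 - se) * f(c)                # true positive missed (f(c), as written in the post)
            + (1 - p) * sp * f(0)                # true negative, correctly dropped
            + (1 - p) * (1 - sp) * f(-c - fup))  # false positive, follow-up cost paid

# a low-prevalence, high-reward regime: expected utility over a coarse
# sensitivity/specificity grid
se_grid, sp_grid = np.meshgrid(np.linspace(0.5, 1, 6), np.linspace(0.5, 1, 6))
U = expected_utility(se_grid, sp_grid, p=0.01, c=1.0, r=50.0, fup=2.0)
print(np.round(U, 3))
```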
How to find the minimum of $\frac{xy}{x^5+xy+y^5}+\frac{yz}{y^5+yz+z^5}+\frac{xz}{x^5+xz+z^5}$

Let $x,y,z>0$ be such that $$x+y+z=1.$$ Find the minimum of $$\dfrac{xy}{x^5+xy+y^5}+\dfrac{yz}{y^5+yz+z^5}+\dfrac{xz}{x^5+xz+z^5}.$$ For the maximum I have found the bound $$\dfrac{xy}{x^5+xy+y^5}\le -243\dfrac{x+y}{841}+\dfrac{23031}{1682},$$ see http://www.wolframalpha.com/input/?i=xy%2F%28x%5E5%2Bxy%2By%5E5%29%2B243*%28x%2By%29%2F841-23031%2F1682 . But I cannot handle the minimum value (maybe use AM-GM). Thank you. — china math

Comments:
- I can't see it being anything except $81/29.$ If all three have to be positive, then there will be trade-offs if $x \neq y \neq z$ because of the cyclic symmetry of the function to minimize. So $x = y = z = 1/3.$ But I can't prove it. – John
- For $g(x,y)=xy/(x^5+xy+y^5)$: if $\partial_x^2 g(x,y) < 0$ for $0<x,y\leq 1$, which some graphing may suggest, then I think there is an argument using Lagrange multipliers that $x=y=z$ is where the min occurs. But it is too late now... – abnry
- Lagrange multipliers looks hairy. I'm guessing there's a trick. – John
- @John Lagrange multipliers work if $g_x(x,y)$ is one-to-one for all $y<1$. All the symmetries work out otherwise in just a fine manner (with some argument). – abnry

Answer (chenbai): The max is $\dfrac{81}{29}$ and the min is $\dfrac{0.25}{2\cdot(0.5)^5+0.25}$; a simple method is to let two variables be equal ($x=y$) and study the resulting function of $x$ (see the graph in the original post). Edit: to prove the max, consider $xy \le \dfrac{x^2+y^2}{2}$ and $x^5+y^5 \ge 2\left(\dfrac{x^2+y^2}{2}\right)^{\frac{5}{2}}$, so that $\dfrac{xy}{x^5+xy+y^5}\le \dfrac{x^2+y^2}{x^2+y^2+4\left(\dfrac{x^2+y^2}{2}\right)^{\frac{5}{2}}}=\dfrac{p}{p+4\left(\dfrac{p}{2}\right)^{\frac{5}{2}}}=f(p)$ with $p=x^2+y^2$. Since $f(p)$ is a concave and monotonically decreasing function and $x^2+y^2+z^2 \ge \dfrac{1}{3}$, we can quickly conclude that the maximum is attained when $x=y=z$.

Comments on the answer:
- Hello, how do you prove the max? Because I think $$\dfrac{xy}{x^5+xy+y^5}\le\dfrac{23031}{1682}-\dfrac{243}{841}(x+y)$$ is not easy to prove. – math110
- @math110 This value is wrong! $\dfrac{xy}{x^5+xy+y^5}< 1 \implies$ max $<3$, but $\dfrac{23031}{1682}>13,\ x+y<1 \to$ Max $>12$. – chenbai
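A quick numerical exploration (added here; it is not part of the original thread) supports the value $81/29$ at $x=y=z=1/3$ for the maximum, but it also suggests the infimum is not the value quoted in the answer: along directions where $y\ll z^4$ the sum can be made arbitrarily small, so the expression appears to have no positive minimum on the open simplex.

```python
import numpy as np

def S(x, y, z):
    term = lambda a, b: a * b / (a**5 + a * b + b**5)
    return term(x, y) + term(y, z) + term(x, z)

print(S(1/3, 1/3, 1/3), 81/29)         # 2.7931..., the conjectured maximum

# random search over the open simplex: values stay at or below 81/29
rng = np.random.default_rng(0)
pts = rng.dirichlet([1, 1, 1], 200_000)
print(S(pts[:, 0], pts[:, 1], pts[:, 2]).max())

# along x = 1 - z - z^6, y = z^6 the sum tends to 0, so 0.8 is not the infimum
for z in [1e-1, 1e-2, 1e-3]:
    print(z, S(1 - z - z**6, z**6, z))
```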
Edmunds, David E. ; Rákosník, Jiří On a higher-order Hardy inequality. (English). Mathematica Bohemica, vol. 124 (1999), issue 2, pp. 113-121 MSC: 26D10, 31C15, 42B25, 46E35 | MR 1780685 | Zbl 0936.31010 | DOI: 10.21136/MB.1999.126250 Hardy inequality; capacity; maximal function; Sobolev space; $p$-thick set The Hardy inequality $\int_\Omega|u(x)|^pd(x)^{-p}\dd x\le c\int_\Omega|\nabla u(x)|^p\dd x$ with $d(x)=\operatorname{dist}(x,\partial\Omega)$ holds for $u\in C^\infty_0(\Omega)$ if $\Omega\subset\Bbb R^n$ is an open set with a sufficiently smooth boundary and if $1<p<\infty$. P. Hajlasz proved the pointwise counterpart to this inequality involving a maximal function of Hardy-Littlewood type on the right hand side and, as a consequence, obtained the integral Hardy inequality. We extend these results for gradients of higher order and also for $p=1$. [1] D. R. Adams L. I. Hedberg: Function spaces and potential theory. Springer, Berlin, 1996. MR 1411441 [2] D. E. Edmunds H. Triebel: Function spaces, entropy numbers and differential operators. Cambridge Tracts in Mathematics, vol. 120, Cambridge University Press, Cambridge, 1996. MR 1410258 [3] D. Gilbarg N. S. Trudinger: Elliptic partial differential equations of second order. (2nd ed.), Springer, Berlin, 1983. MR 0737190 [4] P. Hajlasz: Pointwise Hardy inequalities. Proc. Amer. Math. Soc. 127 (1999), 417-423. DOI 10.1090/S0002-9939-99-04495-0 | MR 1458875 | Zbl 0911.31005 [5] J. Heinonen T. Kilpeläinen O. Martio: Nonlinear potential theory of degenerate elliptic equations. Oxford Science Publications, Clarendon. Press, Oxford, 1993. MR 1207810 [6] J. Kinnunen O. Martio: Hardy's inequalities for Sobolev functions. Math. Res. Lett. 4 (1997), no. 4, 489-500. DOI 10.4310/MRL.1997.v4.n4.a6 | MR 1470421 [7] J. L. Lewis: Uniformly fat sets. Trans. Amer. Math. Soc. 308 (1988), no. 1, 177-196. DOI 10.1090/S0002-9947-1988-0946438-4 | MR 0946438 | Zbl 0668.31002 [8] V. G. Maz'ya: Sobolev spaces. Springer, Berlin, 1985. MR 0817985 | Zbl 0727.46017 [9] P. Mikkonen: On the Wolff potential and quasilinear elliptic equations involving measures. Ann. Acad. Sci.Fenn.Ser. A.I. Math, Dissertationes 104 (1996), 1-71. MR 1386213 | Zbl 0860.35041 [10] B. Opic A. Kufner: Hardy-type inequalities. Pitman Research Notes in Math. Series 219, Longman Sci. &Tech., Harlow, 1990. MR 1069756 [11] E. M. Stein: Singular integrals and differentiability properties of functions. Princeton University Press, Princeton, N.J., 1970. MR 0290095 | Zbl 0207.13501 [12] A. Wannebo: Hardy inequalities. Proc. Amer. Math. Soc. 109 (1990), no. 1, 85-95. DOI 10.1090/S0002-9939-1990-1010807-1 | MR 1010807 | Zbl 0715.26009
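As a purely numerical illustration of the inequality in the abstract (for $n=1$, $p=2$, $\Omega=(0,1)$; this is only a sanity check with two explicit test functions and is not part of the paper):

```python
import numpy as np
from scipy.integrate import trapezoid

x = np.linspace(1e-6, 1 - 1e-6, 200_001)
d = np.minimum(x, 1 - x)                   # dist(x, boundary) on Omega = (0, 1)
tests = [("x(1-x)", x * (1 - x), 1 - 2 * x),
         ("sin(pi x)", np.sin(np.pi * x), np.pi * np.cos(np.pi * x))]
for name, u, du in tests:
    lhs = trapezoid((u / d) ** 2, x)       # integral of |u|^2 d(x)^{-2}
    rhs = trapezoid(du ** 2, x)            # integral of |u'|^2
    print(name, lhs, rhs, lhs / rhs)       # ratios ~1.75 and ~1.5: bounded, as the inequality asserts
```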
Boundary value problems of elliptic and parabolic type with boundary data of negative regularity Felix Hummel ORCID: orcid.org/0000-0002-2374-70301 Journal of Evolution Equations volume 21, pages 1945–2007 (2021)Cite this article We study elliptic and parabolic boundary value problems in spaces of mixed scales with mixed smoothness on the half-space. The aim is to solve boundary value problems with boundary data of negative regularity and to describe the singularities of solutions at the boundary. To this end, we derive mapping properties of Poisson operators in mixed scales with mixed smoothness. We also derive \(\mathcal {R}\)-sectoriality results for homogeneous boundary data in the case that the smoothness in normal direction is not too large. Avoid the most common mistakes and prepare your manuscript for journal editors. In recent years, there were some efforts to generalize classical results on the bounded \(\mathcal {H}^\infty \)-calculus ([7, 8, 13, 14]) and maximal regularity ([8, 9, 11, 12, 21]) of elliptic and parabolic equations to cases in which rougher boundary data can be considered. The main tool in order to derive these generalizations is spatial weights, especially power weights of the form $$\begin{aligned} w_r^{\partial \mathcal {O}}(x):={\text {dist}}(x,\partial \mathcal {O})^r \quad (x\in \mathcal {O}), \end{aligned}$$ which measure the distance to the boundary of the domain \(\mathcal {O}\subset \mathbb {R}^n\). Including weights which fall outside the \(A_p\)-range, i.e., weights with \(r\notin (-1,p-1)\), provides a huge flexibility concerning the smoothness of the boundary data which can be considered. We refer the reader to [32] in which the bounded \(\mathcal {H}^\infty \)-calculus for the shifted Dirichlet Laplacian in \(L_p(\mathcal {O},w_r^{\partial \mathcal {O}})\) with \(r\in (-1,2p-1){\setminus }\{p-1\}\) has been obtained and applications to equations with rough boundary data are given. One even obtains more flexibility if one studies boundary value problems in weighted Besov and Triebel–Lizorkin spaces. Maximal regularity results for the heat equation with inhomogeneous boundary data have been obtained in [30]. In [22], similar results were shown for general elliptic and parabolic boundary value problems. The elliptic and parabolic equations we are interested in are of the form $$\begin{aligned} \lambda u -A(D)u&=f\quad \;\text {in }\quad \mathbb {R}^n_+,\nonumber \\ B_j(D)u&=g_j\quad \text {on }\quad \mathbb {R}^{n-1}\quad (j=1,\ldots ,m), \end{aligned}$$ $$\begin{aligned} \partial _t u -A(D)u&=f\quad \;\text {in }\quad \mathbb {R}\times \mathbb {R}^n_+,\nonumber \\ B_j(D)u&=g_j\quad \text {on }\quad \mathbb {R}\times \mathbb {R}^{n-1}\quad (j=1,\ldots ,m), \end{aligned}$$ where \(A,B_1,\ldots ,B_m\) is a homogeneous constant-coefficient parameter-elliptic boundary system, f is a given inhomogeneity and the \(g_j\) \((j=1,\ldots ,m)\) are given boundary data. Of course, f and the \(g_j\) in (1-2) may depend on time. We will also study the case in which (1-2) is supplemented by initial conditions, i.e., $$\begin{aligned} \partial _t u -A(D)u&=f\quad \;\text {in }\quad (0,T]\times \mathbb {R}^n_+,\nonumber \\ B_j(D)u&=g_j\quad \text {on }\quad (0,T]\times \mathbb {R}^{n-1}\quad (j=1,\ldots ,m),\nonumber \\ u(0,\,\cdot \,)&=u_0 \end{aligned}$$ for some \(T\in \mathbb {R}_+\). The focus will lie on the systematic treatment of boundary conditions \(g_j\) which are only assumed to be tempered distributions. 
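For orientation, the simplest concrete instance of (1-3) is the heat equation with Dirichlet boundary condition, i.e., \(A(D)=\Delta \), \(m=1\) and \(B_1(D)=1\), so that the boundary operator reduces to the trace map; this example (ours, chosen to match the Dirichlet Laplacian discussed above) reads
$$\begin{aligned} \partial _t u -\Delta u&=f\quad \;\text {in }\quad (0,T]\times \mathbb {R}^n_+,\\ u&=g\quad \text {on }\quad (0,T]\times \mathbb {R}^{n-1},\\ u(0,\,\cdot \,)&=u_0. \end{aligned}$$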
In particular, boundary data of negative regularity will be included. However, we still have some restrictions on the smoothness in time of the boundary data for (1-3). One reason for the interest in the treatment of rougher boundary data is that they naturally appear in problems with boundary noise. The fact that white noise terms have negative pathwise regularity (see for example [4, 16, 47]) was one of the main motivations for this work. It was already observed in [6] that even in one dimension solutions to equations with Gaussian boundary noise only have negative regularity in an unweighted setting. By introducing weights, this issue was resolved for example in [1]. We also refer to [5] in which the singularities at the boundary of solutions of Poisson and heat equation with different kinds of noise are analyzed. One drawback of the methods in [1, 5, 6] is that solutions are constructed in a space which is too large for traces to exist, i.e., the operators $$\begin{aligned} {\text {tr}}_{\partial \mathcal {O}}B_j(D)\quad (j=1,\ldots ,m) \end{aligned}$$ are not well defined as operators from the space in which the solution is constructed to the space of boundary data. This problem is avoided by using a mild solution concept, which is a valid approach in the classical setting and therefore, it seems reasonable to accept mild solutions as good enough, even though \({\text {tr}}_{\partial \mathcal {O}}B_j(D)u\) does not make sense on its own. In this paper, we propose a point of view which helps us to give a meaning to \({\text {tr}}_{\partial \mathcal {O}}B_j(D)u\) in a classical sense. We will exploit that solutions to (1-1), (1-2) and (1-3) are very smooth in normal directions so that taking traces will easily be possible, even if the boundary data is just given by tempered distributions. This can be seen by studying these equations in spaces of the form \(\mathscr {B}^k(\mathbb {R}_{+},\mathscr {A}^s(\mathbb {R}^{n-1}))\), where \(\mathscr {A}\) and \(\mathscr {B}\) denote certain scales of function spaces with smoothness parameters s and k, respectively. The parameter k corresponds to the smoothness in normal direction and will be taken large enough so that we can take traces and the parameter s corresponds to smoothness in tangential directions and will be taken small enough so that \(\mathscr {A}^s\) contains the desired boundary data. This way, we will not only be able to give a meaning to \({\text {tr}}_{\partial \mathcal {O}}B_j(D)u\), but we will also obtain tools which help us to analyze the singularities of solutions at the boundary. This supplements the quantitative analysis in [22, 30, 32]. The idea to use spaces with mixed smoothness is quite essential in this paper, even if one refrains from using mixed scales. We refer to [42, Chapter 2] for an introduction to spaces with dominating mixed smoothness. It seems like these spaces have not been used in the theory of partial differential equations so far. Nonetheless, we should mention that they are frequently studied in the theory of function spaces and have various applications. In particular, they are a classical tool in approximation theory in a certain parameter range, see for example [46, Chapter 11]. This paper is structured in the following way: Section 2 briefly introduces the tools and concepts we use throughout the paper. This includes some notions and results from the geometry of Banach spaces, \(\mathcal {R}\)-boundedness and weighted function spaces. In Sect. 
3, we study pseudo-differential operators in mixed scales with mixed smoothness. This will be important for the treatment of Poisson operators, as we will view them as functions in normal direction with values in the space of pseudo-differential operators of certain order in tangential directions. Section 4 is the central part of this paper and the basis for the results in the subsequent sections. Therein, we derive various mapping properties of Poisson operators with values in spaces of mixed scales and mixed smoothness. In Sect. 5, we study Eq. (1-1) in spaces of mixed scales and mixed smoothness with homogeneous boundary data, i.e., with \(g_j=0\). We derive \(\mathcal {R}\)-sectoriality of the corresponding operator under the assumption that the smoothness in normal direction is not too high. As a consequence, we also obtain maximal regularity for (1-3) with \(g_j=0\) in the UMD case. Finally, we apply our techniques to Eqs. (1-1), (1-2) and (1-3) in Sect. 6. We will be able to treat (1-1) and (1-2) for arbitrary regularity in space and time. However, for the initial boundary value problem (1-3) we still have some restrictions concerning the regularity in time of the boundary data. Comments on localizations We should emphasize that we do not address questions of localization or perturbation in this work. Thus, we do not yet study what kind of variable coefficients or lower-order perturbations of the operators we can allow. We also do not yet study how our results can be transferred to more general geometries than just the half-space. Nonetheless, we want to give some ideas on how one can proceed. The usual approach to transfer results for boundary value problems from the model problem on the half-space with constant coefficients to more general domains with compact smooth boundary and operators with non-constant coefficients is quite technical but standard. One takes a cover of the boundary which is fine enough such that in each chart the equation almost looks like the model problem with just a small perturbation on the coefficients and the geometry. In order to formally treat the local problem in a chart as the model problem, one has to find suitable extensions of the coefficients and the geometry such that the parameter-ellipticity is preserved and such that the coefficients are constant up to a small perturbation. One also carries out similar steps in the interior of the domain, where the equation locally looks like an elliptic or parabolic equation in \(\mathbb {R}^n\). The essential step is then to derive perturbation results which justify that these small perturbations do not affect the property which one wants to transfer to the more general situation. Such a localization procedure has been carried out in full detail in [36]. We also refer the reader to [8, Chapter 8]. The localization approach also seems to be reasonable in our situation. However, there is an additional difficulty: As described above, we work in spaces of the form \(\mathscr {B}^k(\mathbb {R}_{+},\mathscr {A}^s(\mathbb {R}^{n-1}))\) which splits the half-space in tangential and normal directions. Since this splitting uses the geometric structure of the half-space, one might wonder what the right generalization of these spaces to a smooth bounded domain would be. In order to answer this question, we think that the notion of a collar of a manifold with boundary should be useful. 
More precisely, due to Milnor's collar neighborhood theorem (see for example [38, Corollary 3.5]) there exists an open neighborhood U of the boundary \(\partial M\) of a smooth manifold M which is diffeomorphic to \(\partial M\times [0,1)\). This neighborhood U is a so-called collar neighborhood. On \(\partial M\times [0,1)\) it is straightforward how to generalize our spaces with mixed smoothness: One could just take \(\mathscr {B}^k([0,1),\mathscr {A}^s(\partial M))\). Now one could define the space $$\begin{aligned} \mathscr {B}^k(\mathscr {A}^s(U)):=\big \{u:U\rightarrow \mathbb {C}\;\vert \; u\circ \Phi ^{-1}\in \mathscr {B}^k([0,1),\mathscr {A}^s(\partial M))\big \}, \end{aligned}$$ where \(\Phi :U\rightarrow \partial M\times [0,1)\) denotes the diffeomorphism provided by the collar neighborhood theorem. It seems natural to endow \(\mathscr {B}^k(\mathscr {A}^s(U))\) with the norm $$\begin{aligned} \Vert u\Vert _{\mathscr {B}^k(\mathscr {A}^s(U))}:=\Vert u\circ \Phi ^{-1} \Vert _{\mathscr {B}^k([0,1),\mathscr {A}^s(\partial M))}. \end{aligned}$$ This definition allows us to give a meaning to the splitting in normal and tangential directions for a neighborhood of the boundary of general domains. It is less clear how to extend this splitting to the interior of the domain. But fortunately, this is also not important in our analysis. Indeed, the solution operators for inhomogeneous boundary data have a strong smoothing effect so that solutions are arbitrarily smooth in the interior of the domain with continuous dependence on the boundary data. Therefore, in the case of smooth coefficients one may work with smooth functions in the interior. To this end, we can take another open set \(V\subset {\text {int}} M\), where \({\text {int}} M\) denotes the interior of M, such that \(M=U \cup V\) and \(\Phi (U\cap V)\subset \partial M\times [\tfrac{1}{2},1)\). Moreover, we take \(\phi ,\psi :C^{\infty }(M)\) such that \({\text {supp}}\phi \subset U\), \({\text {supp}}\psi \subset V\) and \(\phi +\psi \equiv 1\). Finally, we think that on bounded domains the right spaces should be given by $$\begin{aligned} \{u:M\rightarrow \mathbb {C}\;\vert \; \phi \cdot u\in \mathscr {B}^k(\mathscr {A}^s(U)),\; \psi \cdot u\in C^{\infty }(V) \}, \end{aligned}$$ since they preserve the splitting in normal and tangential directions close to the boundary and since they use the smoothing effect of the solution operators where the splitting cannot be preserved anymore. Comparison to other works There are several other works which study boundary value problems with rough boundary data. The approach which seems to be able to treat the most boundary data is the one by Lions and Magenes, see [33,34,35]. It also allows for arbitrary regularity at the boundary. One of the main points is that the trace operator is extended to the space \(D_A^{-r}\) (see [33, Theorem 6.5]) where it maps into a boundary space with negative regularity. The space \(D_A^{-r}\) contains those distributions \(u\in H^{-r}(\Omega )\) such that Au is in the dual of the space $$\begin{aligned} \Xi ^{r+2m}:=\big \{u\;\vert \;{\text {dist}}(\,\cdot \,,\partial \Omega )^{|\alpha |}\partial ^{\alpha }u\in L_2(\Omega ),\;|\alpha |\le r+2m\bigg \}, \end{aligned}$$ i.e., \(\Xi ^{r+2m}\) contains functions whose derivatives may have singularities at the boundary with a certain order. This extension of the trace operator was generalized to the scale of Hörmander spaces in [3], where the authors used suitable interpolation techniques. 
One advantage of our approach compared to [3, 33,34,35] is that we can give a detailed quantitative analysis of the smoothness and singularities of solutions at the boundary. For example, we show in Theorem 4.16 that solutions of (1-1) are arbitrarily smooth in normal direction if the smoothness in tangential direction is chosen low enough. Moreover, we describe the singularities at the boundary if the smoothness in tangential direction is too high. Another technique to treat rougher boundary data is to systematically study boundary value problems in weighted spaces. This has for example been carried out in [22, 30, 32]. By using power weights in the \(A_{\infty }\) range, one can push the regularity on the boundary down to almost 0. However, if one works with \(A_{\infty }\) weights, then many Fourier analytic tools are not available anymore in the \(L_p\) scale. The situation is better in Besov and Triebel–Lizorkin spaces, where Fourier multiplier techniques can still be applied. This has been used for second-order operators with Dirichlet boundary conditions in [30] and for general parameter-elliptic and parabolic boundary value problems in [22]. In both references, maximal regularity with inhomogeneous boundary data has been derived. Lindemulder and Veraar [32] derive a bounded \(\mathcal {H}^\infty \)-calculus for the Dirichlet Laplacian in weighted \(L_p\)-spaces even for some weights which fall outside the \(A_p\) range. The main tool therein to replace Fourier multiplier techniques are variants of Hardy's inequality. The results derived in [22, 30, 32] are stronger than the ones we derive here in the sense that we do not derive maximal regularity with inhomogeneous boundary data or a bounded \(\mathcal {H}^\infty \)-calculus. As in our work, the singularities at the boundary are described by the strength of the weights one has to introduce. However, we can treat much more boundary data since [22, 30, 32] are restricted to positive regularity on the boundary. There are also works dealing with rough boundary data for nonlinear equations such as the Navier–Stokes equation, see for example [2, 19] and references therein. The former reference uses the notion of very weak solutions as well as semigroup and interpolation–extrapolation methods. The latter reference studies the problem in the context of the Boutet de Monvel calculus. Our methods would still have to be extended to nonlinear problems. However, in both of the cited works there are restrictions on the regularity of the boundary data which can be considered. Notations and assumptions We write \(\mathbb {N}=\{1,2,3,\ldots \}\) for the natural numbers starting from 1 and \(\mathbb {N}_0=\{0,1,2,\ldots \}\) for the natural numbers starting from 0. Throughout the paper, we take \(n\in \mathbb {N}\) to be the space dimension and write $$\begin{aligned} \mathbb {R}^n_+:=\{x=(x_1,\ldots ,x_n)\in \mathbb {R}^n: x_n>0\}. \end{aligned}$$ If \(n=1\), we also just write \(\mathbb {R}_+:=\mathbb {R}^1_+\). Given a real number \(x\in \mathbb {R}\), we write $$\begin{aligned} x_+:=[x]_+:=\max \{0,x\}. \end{aligned}$$ We will frequently use the notation with the brackets for sums or differences of real numbers. Oftentimes, we split \(x=(x',x_n)\in \mathbb {R}^{n-1}\times \mathbb {R}\) or in the Fourier image \(\xi =(\xi ',\xi _n)\) where \(x',\xi '\in \mathbb {R}^{n-1}\) refer to the directions tangential to the boundary \(\mathbb {R}^{n-1}=\partial \mathbb {R}^{n}_+\) and \(x_n,\xi _n\in \mathbb {R}\) refer to the normal directions. 
Given \(x\in \mathbb {C}^n\) or a multi-index \(\alpha \in \mathbb {N}_0^n\), we write $$\begin{aligned} |x|:=\bigg (\sum _{j=1}^n|x_j|^2\bigg )^{1/2}\quad \text {or}\quad |\alpha |=\sum _{j=1}^n|\alpha _j| \end{aligned}$$ for the Euclidean length of x or the \(\ell ^1\)-norm of the multi-index \(\alpha \), respectively. Even though this notation is ambiguous, it is convention in the literature and we therefore stick to it. We write $$\begin{aligned} xy:=x\cdot y:=\sum _{j=1}^n x_j\overline{y}_j\quad (x,y\in \mathbb {C}^n) \end{aligned}$$ for the usual scalar product. The Bessel potential will be denoted by $$\begin{aligned} \langle x\rangle :=(1+|x|^2)^{1/2}\quad (x\in \mathbb {C}^n). \end{aligned}$$ Given an angle \(\phi \in (0,\pi ]\), we write $$\begin{aligned} \Sigma _{\phi }:=\{z\in \mathbb {C}:|{\text {arg}} z|<\phi \}. \end{aligned}$$ If M is a set, then we use the notation $$\begin{aligned} {\text {pr}}_j:M^n\rightarrow M,\; (a_1,\ldots ,a_n)\rightarrow a_j\quad (j=1,\ldots ,n) \end{aligned}$$ for the canonical projection of \(M^n\) to the j-th component. Throughout the paper, E will denote a complex Banach space on which we impose additional conditions at certain places. The topological dual of a Banach space \(E_0\) will be denoted by \(E_0'\). By \(\mathscr {S}(\mathbb {R}^n;E)\) and \(\mathscr {S}'(\mathbb {R}^n;E)\) we denote the spaces of E-valued Schwartz functions and E-valued tempered distributions, respectively. Given a domain \(\mathcal {O}\subset \mathbb {R}^n\), we write \(\mathscr {D}(\mathcal {O};E)\) and \(\mathscr {D}'(\mathcal {O};E)\) for the spaces of E-valued test functions and E-valued distributions, respectively. If \(E=\mathbb {C}\) in some function space, then we will omit it in the notation. On \(\mathscr {S}(\mathbb {R}^n;E)\), we define the Fourier transform $$\begin{aligned} (\mathscr {F}f)(\xi ):=\frac{1}{(2\pi )^{n/2}}\int _{\mathbb {R}^n} e^{-ix\xi } f(x)\,\mathrm{d}x\quad (f\in \mathscr {S}(\mathbb {R}^n;E)). \end{aligned}$$ As usual, we extend it to \(\mathscr {S}'(\mathbb {R}^n;E)\) by \([\mathscr {F}u](f):=u(\mathscr {F}f)\) for \(u\in \mathscr {S}'(\mathbb {R}^n;E)\) and \(f\in \mathscr {S}(\mathbb {R}^n)\). Sometimes, we also use the Fourier transform \(\mathscr {F}'\) which only acts on the tangential directions, i.e., $$\begin{aligned} (\mathscr {F}'f)(\xi ',x_n):=\frac{1}{(2\pi )^{(n-1)/2}}\int _{\mathbb {R}^n} e^{-ix'\xi '} f(x',x_n)\,\mathrm{d}x'\quad (f\in \mathscr {S}(\mathbb {R}^n;E)). \end{aligned}$$ By \(\sigma (T)\) and \(\rho (T)\), we denote the spectrum and the resolvent set, respectively, of a linear operator \(T:E\supset D(T)\rightarrow E\) defined on the domain D(T). We write \(\mathcal {B}(E_0,E_1)\) for the set of all bounded linear operators from the Banach space \(E_0\) to the Banach space \(E_1\) and we set \(\mathcal {B}(E):=\mathcal {B}(E,E)\). If \(f,g:M\rightarrow \mathbb {R}\) map some parameter set M to the reals, then we occasionally write \(f\lesssim g\) if there is a constant \(C>0\) such that \(f(x)\le C g(x)\) for all \(x\in M\). If \(f\lesssim g\) and \(g\lesssim f\), we also write \(f\eqsim g\). We mainly use this notation in longer computations. 
Now we formulate our assumptions on the operators \(A(D),B_1(D),\ldots ,B_m(D)\): Let $$\begin{aligned} A(D)=\sum _{|\alpha |=2m}a_{\alpha }D^{\alpha },\quad B_j(D)=\sum _{|\beta |=m_j} b^j_{\beta }D^{\beta }\quad (j=1,\ldots ,m) \end{aligned}$$ for some \(m,m_1,\ldots ,m_m\in \mathbb {N}\) with \(m_j<2m\) \((j=1,\ldots ,m)\) and \(a_{\alpha },b^j_{\beta }\in \mathcal {B}(E)\). Assumption 1.1 (Ellipticity and Lopatinskii–Shapiro condition) There is a \(\phi '\in (0,\pi ]\) such that \(\rho (A(\xi ))\subset \Sigma _{\phi '}\) for all \(\xi \in \mathbb {R}^n{\setminus }\{0\}\). The equation $$\begin{aligned} \lambda u(x_n)-A(\xi ',D_n)u(x_n)&=0\quad \,\,(x_n>0),\\ B_j(\xi ',D_n)u(0)&=g_j\quad (j=1,\ldots ,m) \end{aligned}$$ has a unique continuous solution u with \(\lim _{x\rightarrow \infty } u(x)=0\) for all \((\lambda ,\xi ')\in \Sigma _{\phi '}\times \mathbb {R}^{n-1}\) and all \(g=(g_1,\ldots ,g_m)\in E^m\). We take \(\phi \in (0,\phi ')\). If time-dependent equations are considered, we assume that \(\phi >\pi /2\). Assumption 1.1 will be a global assumption which we assume to hold true without explicitly mentioning this every time. As we also consider mixed scales in this paper, there will be a lot of different choices of the precise spaces. Moreover, for the Bessel potential scale we will need different assumptions on the weights and the Banach space E than for the Besov scale, Triebel–Lizorkin scale, or their dual scales. Thus, it will be convenient to introduce a notation which covers all these different cases. Some of the notation and notions in the following assumption will be introduced later in Sect. 2. For the moment, we just mention that H denotes the Bessel potential scale, B the Besov scale, \(\mathcal {B}\) its dual scale, F the Triebel–Lizorkin scale and \(\mathcal {F}\) its dual scale. Let E be a Banach space, \(s_0,s_1,s_2\in \mathbb {R}\), \(p_0,p_1,p_2\in [1,\infty )\) and \(q_0,q_1,q_2\in [1,\infty ]\). Let further \(w_0,w_1,w_2\) be weights and \(I_{x_n},J_{t}\subset \mathbb {R}\) intervals. In the following, \(\bullet \) is a placeholder for any suitable choice of parameters. Moreover, by writing \(J_{t}\), \(I_{x_n}\) and \(\mathbb {R}^{n-1}_{x'}\) we indicate with respect to which variable the spaces should be understood. Here, t denotes the time, \(x_n\) the normal direction and \(x'\) the tangential directions. We take $$\begin{aligned} \mathscr {A}^\bullet \in \{H^\bullet _{p_0}(\mathbb {R}^{n-1}_{x'},&w_0;E), B^\bullet _{p_0,q_0}(\mathbb {R}^{n-1}_{x'},w_0;E),F^\bullet _{p_0,q_0}(\mathbb {R}^{n-1}_{x'},w_0;E),\\&\mathcal {B}^\bullet _{p_0,q_0}(\mathbb {R}^{n-1}_{x'},w_0;E),\mathcal {F}^\bullet _{p_0,q_0}(\mathbb {R}^{n-1}_{x'},w_0;E)\}. \end{aligned}$$ If \(\mathscr {A}^\bullet \) belongs to the Bessel potential scale, we assume that \(p_0\in (1,\infty )\), that E is a UMD space and that \(w_0\) is an \(A_p(\mathbb {R}^{n-1})\) weight. If \(\mathscr {A}^\bullet \) belongs to the Besov or Triebel–Lizorkin scale, we assume that \(w_0\) is an \(A_{\infty }(\mathbb {R}^{n-1})\) weight. If \(\mathscr {A}^\bullet \) belongs to the dual scale of Besov or Triebel–Lizorkin scale, we assume that \(w_0\) is an \([A_{\infty }(\mathbb {R}^{n-1})]_p'\) weight, \(p_0,q_0\in (1,\infty )\) and that E is a UMD space. 
$$\begin{aligned}&\mathscr {B}^{\bullet }(I_{x_n};\mathscr {A}^\bullet )\in \{H^\bullet _{p_1}(I_{x_n},w_1;\mathscr {A}^\bullet ), B^\bullet _{p_1,q_1}(I_{x_n},w_1;\mathscr {A}^\bullet ),F^\bullet _{p_1,q_1}(I_{x_n},w_1;\mathscr {A}^\bullet ),\\&\qquad \mathcal {B}^\bullet _{p_1,q_1}(I_{x_n},w_1;\mathscr {A}^\bullet ),\mathcal {F}^\bullet _{p_1,q_1}(I_{x_n},w_1;\mathscr {A}^\bullet )\}. \end{aligned}$$ We impose conditions on \(w_1,p_1,q_1\) and E which are analogous to the ones for \(w_0,p_0,q_0\) and E in part (a). $$\begin{aligned}&\mathscr {C}^{\bullet }(J_t;\mathscr {A}^\bullet )\in \{H^\bullet _{p_2}(J_t,w_2;\mathscr {A}^\bullet ), B^\bullet _{p_2,q_2}(J_t,w_2;\mathscr {A}^\bullet ),F^\bullet _{p_2,q_2}(J_t,w_2;\mathscr {A}^\bullet ),\\&\qquad \mathcal {B}^\bullet _{p_2,q_2}(J_t,w_2;\mathscr {A}^\bullet ),\mathcal {F}^\bullet _{p_2,q_2}(J_t,w_2;\mathscr {A}^\bullet )\}. \end{aligned}$$ $$\begin{aligned}&\mathscr {C}^{\bullet }(J_{t};\mathscr {B}^{\bullet }(I_{x_n};\mathscr {A}^\bullet ))\in \{H^\bullet _{p_2}(J_{t},w_2;\mathscr {B}^{\bullet }(I_{x_n};\mathscr {A}^\bullet )), B^\bullet _{p_2,q_2}(J_{t},w_2;\mathscr {B}^{\bullet }(I_{x_n};\mathscr {A}^\bullet )),\\&\qquad F^\bullet _{p_2,q_2}(J_{t},w_2;\mathscr {B}^{\bullet }(I_{x_n};\mathscr {A}^\bullet )), \}. \end{aligned}$$ Most of the time, we just write \(\mathscr {A}^s\), \(\mathscr {B}^k(\mathscr {A}^s)\) and \(\mathscr {C}^l(\mathscr {B}^k(\mathscr {A}^s))\) instead of \(\mathscr {A}^s_{p_0,q_0}(\mathbb {R}^{n-1}_{x'},w_0;E)\), \(\mathscr {B}^k_{p_1,q_1}(I_{x_n},w_1;\mathscr {A}^s(\mathbb {R}^{n-1}_{x'},w_0;E))\) and \(\mathscr {C}^l_{p_2,q_2}(J_t,w_2;\mathscr {B}^k_{p_1,q_1}(I_{x_n},w_1;\mathscr {A}^s(\mathbb {R}^{n-1}_{x'},w_0;E))\). We mainly do this in order to keep notations shorter. Moreover, most of the time we only work with the smoothness parameter so that adding the other parameters to the notation would be distracting. However, at some places we will still add some of the other parameters if more clarity is needed. Also Assumption 1.2 will be global and we use this notation throughout the paper. Assumption 1.2 is formulated in a way such that we can always apply Mikhlin's theorem, Theorem 2.15, and its iterated versions Proposition 3.7 and Proposition 3.8. If E has to satisfy Pisier's property \((\alpha )\) for some results, we will explicitly mention it. Note that every \(f\in \mathscr {S}'(\mathbb {R}^{n-1})\) is contained in one of the spaces \(\mathscr {A}^{\bullet }\) with certain parameters, see for example [26, Proposition 1]. Some notions from the geometry of Banach spaces If one wants to transfer theorems from a scalar-valued to a vector-valued situation, then this is oftentimes only possible if one imposes additional geometric assumptions on the Banach space. And since the iterated spaces we introduced in Assumption 1.2 are vector-valued even if we take \(E=\mathbb {R}\) or \(E=\mathbb {C}\), it should not come as a surprise that we have to introduce some of these geometric notions. We refer the reader to [23, 24] for an extensive treatment of the notions in this subsection. UMD spaces The importance of UMD spaces lies in the fact that Mikhlin's Fourier multiplier theorem has only been generalized for operator-valued symbols if the underlying Banach spaces are UMD spaces. Therefore, if one wants to work with Fourier multipliers on vector-valued \(L_p\)-spaces, one is forced to impose this geometric condition. 
A Banach space E is called UMD space if for all \(p\in (1,\infty )\) there is a constant \(C>0\) such that for all probability spaces \((\Omega ,\mathcal {F},\mathbb {P})\), all \(N\in \mathbb {N}\), all \(\varepsilon _1,\ldots ,\varepsilon _N\in \mathbb {C}\) with \(|\varepsilon _1|=\ldots =|\varepsilon _N|=1\), all filtrations \((\mathcal {F}_k)_{k=0}^N\) and all martingales \((f_k)_{k=0}^N\) in \(L_p(\Omega ;E)\) it holds that $$\begin{aligned} \bigg \Vert \sum _{k=1}^N \varepsilon _k (f_k-f_{k-1})\bigg \Vert _{L_p(\Omega ;E)}\le C \bigg \Vert \sum _{k=1}^N f_k-f_{k-1}\bigg \Vert _{L_p(\Omega ;E)}. \end{aligned}$$ This is equivalent to E being a Banach space of class \(\mathcal {HT}\), which is defined by the boundedness of the Hilbert transform on \(L_p(\mathbb {R};E)\). UMD spaces are always reflexive. Some important examples of UMD spaces are: Hilbert spaces, in particular the scalar fields \(\mathbb {R},\mathbb {C}\), the space \(L_p(S;E)\) for \(p\in (1,\infty )\), a \(\sigma \)-finite measure space \((S,\mathcal {A},\mu )\) and a UMD space E, the classical function spaces such as Bessel potential spaces \(H^{s}_p\), Besov spaces \(B^{s}_{p,q}\) and Triebel–Lizorkin spaces \(F^{s}_{p,q}\) in the reflexive range as well as their E-valued versions if E is a UMD space. Cotype In this work, Banach spaces satisfying a finite cotype assumption could be considered as merely a technical notion that is needed to derive Proposition 4.11 which is a sharper version of Proposition 4.9. The latter does not need a finite cotype assumption, while we show that it seems to be necessary to derive the former in Proposition 4.13. The main reason why we need finite cotype assumptions is that they allow us to use a version of Kahane's contraction principle with function coefficients, see Proposition 2.1. Let \((\Omega ,\mathcal {F},\mathbb {P})\) be a probability space. A sequence of random variables \((\varepsilon _k)_{k\in \mathbb {N}}\) is called Rademacher sequence if it is an i.i.d. sequence with \(\mathbb {P}(\varepsilon _k=1)=\mathbb {P}(\varepsilon _k=-1)=\frac{1}{2}\) for \(k\in \mathbb {N}\). A Banach space E is said to have cotype \(q\in [2,\infty ]\) if there is a constant \(C>0\) such that for all choices of \(N\in \mathbb {N}\) and \(x_1,\ldots ,x_N\in E\) the estimate $$\begin{aligned} \bigg (\sum _{k=1}^N \Vert x_k\Vert ^q\bigg )^{1/q}\le C\bigg (\mathbb {E}\big \Vert \sum _{k=1}^N\varepsilon _kx_k\big \Vert ^q\bigg )^{1/q} \end{aligned}$$ holds with the usual modification for \(q=\infty \). We want to remark the following Every Banach space has cotype \(\infty \). If a Banach space has cotype \(q\in [2,\infty )\), then it also has cotype \(\widetilde{q}\in [q,\infty ]\). No nontrivial Banach space can have cotype \(q\in [1,2)\) since even the scalar fields \(\mathbb {R}\) and \(\mathbb {C}\) do not satisfy this. If the Banach space E has cotype \(q_E\), then \(L_p(S;E)\) has cotype \(\max \{p,q_E\}\) for every measure space \((S,\mathcal {A},\mu )\). If the Banach space E has cotype \(q_E\), then \(H^{s}_p(\mathbb {R}^n;E)\) has cotype \(\max \{p,q_E\}\) and \(B^{s}_{p,q}(\mathbb {R}^n;E)\) and \(F^{s}_{p,q}(\mathbb {R}^n;E)\) have cotype \(\max \{p,q,q_E\}\). The same also holds for the weighted variants we introduce later. Pisier's property \((\alpha )\) Finally, we also need Pisier's property \((\alpha )\) at some places in this paper. This condition is usually needed if one wants to derive \(\mathcal {R}\)-boundedness from Mikhlin's multiplier theorem. 
If one has a set of \(\mathcal {R}\)-bounded operator-valued symbols, then one needs Pisier's property \((\alpha )\) in order to obtain the \(\mathcal {R}\)-boundedness of the resulting operator family. A Banach space E has Pisier's property \((\alpha )\) if Kahane's contraction principle also holds for double random sums, i.e., if for two Rademacher sequences \((\varepsilon '_i)_{i\in \mathbb {N}}\), \((\varepsilon ''_j)_{j\in \mathbb {N}}\) on the probability spaces \((\Omega ',\mathcal {F}',\mathbb {P}')\) and \((\Omega '',\mathcal {F}'',\mathbb {P}'')\), respectively, there is a constant \(C>0\) such that for all \(M,N\in \mathbb {N}\), all \((a_{ij})_{1\le i\le M,1\le j\le N}\subset \mathbb {C}\) with \(|a_{ij}|\le 1\) and all \((x_{ij})_{1\le i\le M,1\le j\le N}\subset E\) the estimate $$\begin{aligned} \mathbb {E}_{\mathbb {P'}}\mathbb {E}_{\mathbb {P''}}\bigg \Vert \sum _{i=1}^M\sum _{j=1}^N a_{ij}\varepsilon _{i}\varepsilon _{j}x_{ij} \bigg \Vert ^2\le C^2 \mathbb {E}_{\mathbb {P'}}\mathbb {E}_{\mathbb {P''}}\bigg \Vert \sum _{i=1}^M\sum _{j=1}^N \varepsilon _{i}\varepsilon _{j}x_{ij} \bigg \Vert ^2 \end{aligned}$$ holds. Even though Pisier's property \((\alpha )\) is independent of the UMD property, the examples of spaces with Pisier's property \((\alpha )\) we have in mind are similar: the space \(L_p(S;E)\) for \(p\in [1,\infty )\), a measure space \((S,\mathcal {A},\mu )\) and a Banach space E with Pisier's property \((\alpha )\), the classical function spaces such as Bessel potential spaces \(H^{s}_p\), Besov spaces \(B^{s}_{p,q}\) and Triebel–Lizorkin spaces \(F^{s}_{p,q}\) in the reflexive range as well as their E-valued versions if E has Pisier's property \((\alpha )\). \(\mathcal {R}\)-bounded Operator Families We refer the reader to [8, 24] for introductions to \(\mathcal {R}\)-bounded operator families. The notion of \(\mathcal {R}\)–boundedness is frequently needed if one works with vector-valued function spaces. As UMD spaces, it is essential for vector-valued generalizations of Mikhlin's multiplier theorem. But perhaps more importantly, it can be used to derive a necessary and sufficient condition for a closed linear operator \(A:E\supset D(A)\rightarrow E\) on the UMD space E to have the property of maximal regularity. This is the case if and only if it is \(\mathcal {R}\)–sectorial, i.e., if and only if the set $$\begin{aligned} \{\lambda (\lambda -A)^{-1}:\lambda \in \mathbb {C},\,\arg \lambda < \phi \} \end{aligned}$$ for some \(\phi >\pi /2\) is \(\mathcal {R}\)–bounded, see [49, Theorem 4.2]. Here, an operator A is said to have the property of maximal regularity on [0, T), \(0<T<\infty \), if the mapping $$\begin{aligned} W^1_p([0,T);X)\cap L_p([0,T);D(A))\rightarrow L_p([0,T);X)\times I_p(A),\;u\mapsto \begin{pmatrix}\partial _t u-Au \\ \gamma _0 u \end{pmatrix} \end{aligned}$$ is an isomorphism of Banach spaces, where \(\gamma _0u:= u(0)\) denotes the temporal trace and \(I_p(A)\) is the space of admissible initial conditions. It can be described as a real interpolation space of X and D(A) by the relation \(I_p(A):=(X,D(A))_{1-1/p.p}\). The above isomorphy is very useful for the treatment of nonlinear parabolic equations, as it allows for the efficient use of fixed point iterations. This approach to nonlinear equations has already been applied many times in the literature. Let us now define \(\mathcal {R}\)–boundedness: Let \(E_0,E_1\) be Banach spaces. 
A family of operators \(\mathcal {T}\subset \mathcal {B}(E_0,E_1)\) is called \(\mathcal {R}\)-bounded if there is a constant \(C>0\) and \(p\in [1,\infty )\) such that for a Rademacher sequence \((\varepsilon _k)_{k\in \mathbb {N}}\) on a probability space \((\Omega ,\mathcal {F},\mathbb {P})\) and all \(N\in \mathbb {N}\), \(x_1,\ldots ,x_N\in E_0\) and \(T_1,\ldots ,T_N\in \mathcal {T}\) the estimate $$\begin{aligned} \left\| \sum _{k=1}^N \varepsilon _k T_k x_k \right\| _{L_p(\Omega ;E_1)}\le C\left\| \sum _{k=1}^N \varepsilon _k x_k \right\| _{L_p(\Omega ;E_0)} \end{aligned}$$ holds. The least admissible constant such that this estimate holds will be denoted by \(\mathcal {R}(\mathcal {T})\) or, if we want to emphasize the dependence on the Banach spaces, by \(\mathcal {R}_{\mathcal {B}(E_0,E_1)}(\mathcal {T})\). By the Kahane–Khintchine inequalities, the notion of \(\mathcal {R}\)-boundedness does not depend on p. \(\mathcal {R}\)-boundedness trivially implies uniform boundedness, but the converse does not hold true in general. For Hilbert spaces however, both notions coincide. An equivalent characterization of \(\mathcal {R}\)-boundedness can be given by using the \({\text {Rad}}_p(E)\)-spaces. They are defined as the space of all sequences \((x_k)_{k\in \mathbb {N}}\subset E\) such that \(\sum _{k=1}^{\infty } \varepsilon _k x_k\) converges in \(L_p(\Omega ;E)\). \({\text {Rad}}_p(E)\)-spaces are endowed with the norm $$\begin{aligned} \Vert (x_k)_{k\in \mathbb {N}}\Vert _{{\text {Rad}}_p(E)}=\sup _{N\in \mathbb {N}}\left\| \sum _{k=1}^N \varepsilon _k x_k \right\| _{L_p(\Omega ;E)}. \end{aligned}$$ Given \(T_1,\ldots ,T_N\in \mathcal {B}(E_0,E_1)\) we define $$\begin{aligned} {\text {diag}}(T_1,\ldots ,T_N):{\text {Rad}}_p(E_0)\rightarrow {\text {Rad}}_p(E_1),\,(x_k)_{k\in \mathbb {N}}\rightarrow (T_kx_k)_{k\in \mathbb {N}} \end{aligned}$$ where \(T_k:=0\) for \(k>N\). Then, a family of operators \(\mathcal {T}\subset \mathcal {B}(E_0,E_1)\) is \(\mathcal {R}\)-bounded if and only if $$\begin{aligned} \{{\text {diag}}(T_1,\ldots ,T_N): N\in \mathbb {N}, T_1,\ldots ,T_N\in \mathcal {T}\}\subset \mathcal {B}({\text {Rad}}_p(E_0),{\text {Rad}}_p(E_1)) \end{aligned}$$ is uniformly bounded. Let us now collect some results concerning \(\mathcal {R}\)-boundedness which will be useful in this paper. Proposition 2.1 Let E be a Banach space with cotype \(q\in [2,\infty )\), \((\varepsilon _k)_{k\in \mathbb {N}}\) a Rademacher sequence on the probability space \((\Omega ,\mathcal {F},\mathbb {P})\) and let \((S,\mathcal {A},\mu )\) be a \(\sigma \)-finite measure space. For all \(\widetilde{q}\in (q,\infty ]\) there exists a constant \(C>0\) such that for all \(N\in \mathbb {N}\), \(f_1,\ldots ,f_N\in L_{\widetilde{q}}(S)\) and \(x_1,\ldots ,x_N\in E\) it holds that $$\begin{aligned} \left\| \sum _{k=1}^N \varepsilon _k f_k x_k \right\| _{L_{\widetilde{q}}(S;L_2(\Omega ;E))}\le C \sup _{1\le k\le N} \Vert f_k\Vert _{L_{\widetilde{q}}(S)}\left\| \sum _{k=1}^N \varepsilon _kx_k\right\| _{L_2(\Omega ;E)}. \end{aligned}$$ If \(q\in \{2,\infty \}\), then we can also take \(\widetilde{q}=q\). This is one of the statements in [25, Lemma 3.1]. 
\(\square \) It was already observed in [25, Remark 3.3] that if \(\widetilde{q}<\infty \), then Proposition 2.1 can also be formulated as follows: The image of the unit ball \(B_{L^{\widetilde{q}}(S)}(0,1)\) in \(L^{\widetilde{q}}(S)\) under the embedding \(L^{\widetilde{q}}(S)\hookrightarrow \mathcal {B}(E,L^{\widetilde{q}}(S;E)),f\mapsto f\otimes (\,\cdot \,)\) is an \(\mathcal {R}\)-bounded subset of \(\mathcal {B}(E,L^{\widetilde{q}}(S;E))\). If \((S,\mathcal {A},\mu )\) is nonatomic and if (2-1) holds for all \(N\in \mathbb {N}\), \(f_1,\ldots ,f_N\in L_{\widetilde{q}}(S)\) and \(x_1,\ldots ,x_N\in E\), then E has cotype \(\widetilde{q}\). This follows from the statements in [25, Lemma 3.1]. Let \((A,\Sigma ,\nu )\) be a \(\sigma \)-finite measure space. Let further \(2\le \overline{q}<q<\infty \) and let \(\overline{E}\) be a Banach space with cotype \(\overline{q}\). If \(E=L_q(A;\overline{E})\), then (2-1) also holds with \(\widetilde{q}=q\). This was shown in [25, Remark 3.4]. Let \((E_0,E_1)\) and \((F_0,F_1)\) be interpolation couples of UMD-spaces, \(\Sigma \subset \mathbb {C}\) and \(f:\Sigma \rightarrow \mathbb {C}\). Let further \((T(\lambda ))_{\lambda \in \Sigma }\subset \mathcal {B}(E_0+E_1,F_0+F_1)\) be a collection of operators such that $$\begin{aligned} \mathcal {R}_{\mathcal {B}(E_0,F_0)}(\{T(\lambda ):\lambda \in \Sigma \})<M_0,\quad \mathcal {R}_{\mathcal {B}(E_1,F_1)}(\{f(\lambda )T(\lambda ):\lambda \in \Sigma \})<M_1 \end{aligned}$$ for some \(M_0,M_1>0\). We write \(E_{\theta }=[E_0,E_1]_{\theta }\) and \(F_{\theta }=[F_0,F_1]_{\theta }\) with \(\theta \in (0,1)\) for the complex interpolation spaces. Then, there is a constant \(C>0\) such that $$\begin{aligned} \mathcal {R}_{\mathcal {B}(E_\theta ,F_\theta )}(\{f(\lambda )^{\theta }T(\lambda ):\lambda \in \Sigma \})<CM_0^{1-\theta }M_1^{\theta }. \end{aligned}$$ In order to avoid possible ambiguities with complex exponentials, we assume that f takes values in \((0,\infty )\). As a consequence of Kahane's contraction principle ([24, Theorem 6.1.13]), we may do this without loss of generality. It suffices to show that $$\begin{aligned} \{{\text {diag}}(f(\lambda _1)^{\theta }T(\lambda _1),\ldots ,f(\lambda _N)^{\theta }T(\lambda _N):N\in \mathbb {N},\lambda _1,\ldots ,\lambda _N\in \Sigma \} \end{aligned}$$ is a bounded family in \(\mathcal {B}({\text {Rad}}_p(E_{\theta }),{\text {Rad}}_p(F_{\theta }))\). Let $$\begin{aligned} S:=\{z\in \mathbb {C}: 0\le {\text {{Re}}}z\le 1\}. \end{aligned}$$ For fixed \(N\in \mathbb {N}\) and \(\lambda _1,\ldots ,\lambda _N\in \Sigma \), we define $$\begin{aligned} \mathscr {T}_{\lambda _1,\ldots ,\lambda _N}:S \rightarrow \mathcal {B}({\text {Rad}}_p(E_{0})&\cap {\text {Rad}}_p(E_{1}), {\text {Rad}}_p(F_{0})+ {\text {Rad}}_p(F_{1})),\\&\,z\mapsto {\text {diag}}( f(\lambda _1)^z T(\lambda _1),\ldots ,f(\lambda _N)^z T(\lambda _N)). \end{aligned}$$ For fixed \((x_k)_{k\in \mathbb {N}}\in {\text {Rad}}_p(E_{0})\cap {\text {Rad}}_p(E_{1})\), the mapping $$\begin{aligned} \mathscr {T}_{\lambda _1,\ldots ,\lambda _N}(\,\cdot \,)(x_k)_{k\in \mathbb {N}}:S \rightarrow {\text {Rad}}_p(F_{0})+ {\text {Rad}}_p(F_{1}),&\,z\mapsto (f(\lambda _k)^zT(\lambda _k)x_k)_{k\in \mathbb {N}}, \end{aligned}$$ is continuous, bounded and analytic in the interior of S. Again, we used the convention \(T(\lambda _k)=0\) for \(k>N\). 
Moreover, by assumption we have that $$\begin{aligned} \sup _{t\in \mathbb {R}}\Vert \mathscr {T}_{\lambda _1,\ldots ,\lambda _N}(j+it)\Vert _{\mathcal {B}({\text{ Rad }}_p(E_{j}),{\text{ Rad }}_p(F_{j}))}<M_j\quad (j\in \{0,1\}). \end{aligned}$$ Thus, it follows from abstract Stein interpolation (see [48, Theorem 2.1]) that $$\begin{aligned} \Vert \mathscr {T}_{\lambda _1,\ldots ,\lambda _N}(\theta )\Vert _{\mathcal {B}({\text {Rad}}_p^{\theta }(E_{0},E_1),{\text {Rad}}_p^{\theta }(F_{0},F_1))}<M_0^{1-\theta }M_1^{\theta }, \end{aligned}$$ where we used the shorter notation \({\text {Rad}}_p^{\theta }(E_{0},E_1)=[{\text {Rad}}_p(E_{0}),{\text {Rad}}_p(E_{1})]_{\theta }\) in the subscript. But it was shown in [27, Corollary 3.16] that $$\begin{aligned}{}[{\text {Rad}}_p(E_0),{\text {Rad}}_p(E_1)]_{\theta }={\text {Rad}}_p(E_{\theta }) \end{aligned}$$ with equivalence of norms so that there is a constant \(C>0\) such that $$\begin{aligned} \Vert \mathscr {T}_{\lambda _1,\ldots ,\lambda _N}(\theta )\Vert _{\mathcal {B}({\text {Rad}}_p(E_{\theta }),{\text {Rad}}_p(F_{\theta }))}<CM_0^{1-\theta }M_1^{\theta }. \end{aligned}$$ Since \(N\in \mathbb {N}\) and \(\lambda _1,\ldots ,\lambda _N\in \Sigma \) were arbitrary, we obtain the assertion. \(\square \) The proof of Proposition 2.3 was inspired by the proof of [20, Lemma 6.9]. Note that [20, Example 6.13] shows that Proposition 2.3 does not hold true if the complex interpolation functor is replaced by the real one. In Proposition 2.3, we only use the UMD assumption in order to show that the interpolation space of two Rademacher spaces coincides with the Rademacher space of the interpolation space of the underlying Banach spaces. This holds more generally for K-convex Banach spaces (see [24, Theorem 7.4.16]). We refrain from introducing K-convexity in order not to overload this paper with geometric notions. Note however that UMD spaces are K-convex, see [24, Example 7.4.8]. Let \(\psi \in (0,\pi )\) and let \(E_0,E_1\) be Banach spaces. Let further \(N:\overline{\Sigma }_{\psi }\rightarrow \mathcal {B}(E_0,E_1)\) be holomorphic and bounded on \(\Sigma _{\psi }\) and suppose that \(N|_{\partial \Sigma _\psi }\) has \(\mathcal {R}\)-bounded range. Then, the set $$\begin{aligned} \{\lambda ^k\big (\tfrac{d}{d\lambda }\big )^kN(\lambda ):\lambda \in \overline{\Sigma }_{\psi '}\} \end{aligned}$$ is \(\mathcal {R}\)-bounded for all \(\psi '<\psi \) and all \(k\in \mathbb {N}_0\). For \(k=0\) and \(k=1\), the proof is contained in [29, Example 2.16]. Other values of k can then be obtained by iteration. Note that the boundedness of N is necessary since Poisson's formula, which is used for \(k=0\), only holds for bounded functions. \(\square \) Definition 2.6 Let (X, d) be a metric space and \(E_0,E_1\) be Banach spaces. Let further \(U\subset \mathbb {R}^n\) be open and \(k\in \mathbb {N}_0\). We say that a function \(f:X\rightarrow \mathcal {B}(E_0,E_1)\) is \(\mathcal {R}\)-continuous if for all \(x\in X\) and all \(\varepsilon >0\) there is a \(\delta >0\) such that we have $$\begin{aligned} \mathcal {R}_{\mathcal {B}(E_0,E_1)}(\{f(y)-f(x): y\in B(x,\delta )\})<\varepsilon . \end{aligned}$$ We write \(C_{\mathcal {R}B}(X,\mathcal {B}(E_0,E_1))\) for the space of all \(\mathcal {R}\)-continuous functions \(f:X\rightarrow \mathcal {B}(E_0,E_1)\) with \(\mathcal {R}\)-bounded range. 
We say that a function \(f:X\rightarrow \mathcal {B}(E_0,E_1)\) is uniformly \(\mathcal {R}\)-continuous if for all \(\varepsilon >0\) there is a \(\delta >0\) such that for all \(x\in X\) we have We write \(BUC_{\mathcal {R}}(X,\mathcal {B}(E_0,E_1))\) for the space of all uniformly \(\mathcal {R}\)-continuous functions \(f:X\rightarrow \mathcal {B}(E_0,E_1)\) with \(\mathcal {R}\)-bounded range. We write \(C_{\mathcal {R}B}^k(U,\mathcal {B}(E_0,E_1))\) for the space of all \(f\in C^k(U,\mathcal {B}(E_0,E_1))\) such that \(\partial ^{\alpha }f\in C_{\mathcal {R}B}(U,\mathcal {B}(E_0,E_1))\) for all \(\alpha \in \mathbb {N}_0^n\), \(|\alpha |\le k\). Analogously, we write \(BUC_{\mathcal {R}}^k(U,\mathcal {B}(E_0,E_1))\) for the space of all \(f\in C^k(U,\mathcal {B}(E_0,E_1))\) such that \(\partial ^{\alpha }f\in BUC_{\mathcal {R}}(U\mathcal {B}(E_0,E_1))\) for all \(\alpha \in \mathbb {N}_0^n\), \(|\alpha |\le k\). Let \(U\subset \mathbb {R}^n\) be open, \(E_0,E_1\) Banach spaces and \(k\in \mathbb {N}_0\). Let further \(\mathcal {T}\subset C_{\mathcal {R}B}^k(X,\mathcal {B}(E_0,E_1))\) or \(\mathcal {T}\subset BUC_{\mathcal {R}}^k(X,\mathcal {B}(E_0,E_1))\). We say that \(\mathcal {T}\) is bounded if $$\begin{aligned} \sup _{f\in \mathcal {T}}\mathcal {R}(f^{(j)}(U):j\in \{0,\ldots ,k\})<\infty . \end{aligned}$$ We say that \(\mathcal {T}\) is \(\mathcal {R}\)-bounded if $$\begin{aligned} \mathcal {R}(\{f^{(j)}(x):x\in U, f\in \mathcal {T}, j\in \{0,\ldots ,k\}\})<\infty . \end{aligned}$$ Weighted function spaces Weights are an important tool to weaken the regularity assumptions on the data which are needed in order to derive well-posedness and a priori estimates for elliptic and parabolic boundary value problems, see for example [1, 5, 22, 30, 32]. Power weights, i.e., weights of the form \(w_\gamma (x):={\text {dist}}(x,\partial \mathcal {O})^\gamma \) which measure the distance to the boundary of the domain \(\mathcal {O}\subset \mathbb {R}^n\), are particularly useful for this purpose. Roughly speaking, the larger the value of \(\gamma \), the larger may the difference between regularity in the interior and regularity on the boundary be. This way, one can obtain arbitrary regularity in the interior while the regularity of the boundary data may be close to 0. However, there is an important borderline: If \(\gamma \in (-1,p-1)\), where p denotes the integrability parameter of the underlying function space, then \(w_\gamma \) is a so-called \(A_p\) weight. If \(\gamma \ge p-1\), then it is only an \(A_{\infty }\) weight. \(A_p\) weights are an important class of weights. These weights are exactly the weights w for which the Hardy–Littlewood maximal operator is bounded on \(L_p(\mathcal {O},w)\). Consequently, the whole Fourier analytic toolbox can still be used and many results can directly be transferred to the weighted setting. In the \(A_{\infty }\)-range however, this does not hold any longer. But in order to obtain more flexibility for the regularity of the boundary data which can be considered, one would like to go beyond the borderline and also work with \(A_{\infty }\) weights. This is possible if one works with weighted Besov or Triebel–Lizorkin spaces. As we will explain later, these scales of function spaces allow for a combination of \(A_{\infty }\) weights and Fourier multiplier methods. 
In our analysis, we want to include both cases: We treat the more classical situation with the Bessel potential scale and \(A_p\) weights, which include the classical Sobolev spaces, as well as the more flexible situation with Besov and Triebel–Lizorkin scales and \(A_{\infty }\) weights. Let us now give the precise definitions: Let \(\mathcal {O}\subset \mathbb {R}^n\) be a domain. A weight w on \(\mathcal {O}\) is a function \(w:\mathcal {O}\rightarrow [0,\infty ]\) which takes values in \((0,\infty )\) almost everywhere with respect to the Lebesgue measure. We mainly work with the classes \(A_p\) \((p\in (1,\infty ])\). A weight w on \(\mathbb {R}^n\) is an element of \(A_p\) for \(p\in (1,\infty )\) if and only if $$\begin{aligned}{}[w]_{A_p}:= \sup _{Q\text { cube in }\mathbb {R}^n}\bigg (\frac{1}{\lambda (Q)}\int _Q w(x)\,\mathrm{d}x\bigg )\bigg (\frac{1}{\lambda (Q)}\int _Q w(x)^{-\frac{1}{p-1}}\,\mathrm{d}x\bigg )^{p-1}<\infty . \end{aligned}$$ The quantity \([w]_{A_p}\) is called \(A_p\) Muckenhoupt characteristic constant of w. We define \(A_{\infty }:=\bigcup _{1<p<\infty } A_p\). Moreover, we write \([A_{\infty }]_p'\) for the space of all weights w such that the p-dual weight \(w^{-\frac{1}{p-1}}\) is in \(A_{\infty }\). We refer to [18, Chapter 9] for an introduction to these classes of weights. For \(p\in [1,\infty )\), a domain \(\mathcal {O}\subset \mathbb {R}^n\), a weight w and a Banach space E the weighted Lebesgue–Bochner space \(L_p(\mathcal {O},w;E)\) is defined as the space of all strongly measurable functions \(f:\mathcal {O}\rightarrow E\) such that $$\begin{aligned} \Vert f\Vert _{L_p(\mathcal {O},w;E)}:=\bigg (\int _{\mathcal {O}} \Vert f(x)\Vert _{E}^p w(x)\,\mathrm{d}x\bigg )^{1/p}<\infty . \end{aligned}$$ We further set \(L_{\infty }(\mathcal {O},w;E):=L_{\infty }(\mathcal {O};E)\). In addition, let \( L_1^{loc}(\mathcal {O};E)\) be the space of all locally integrable functions, i.e., strongly measurable functions \(f:\mathcal {O}\rightarrow E\) such that $$\begin{aligned} \int _K \Vert f(x)\Vert _E\,\mathrm{d}x<\infty \end{aligned}$$ for all compact \(K\subset \mathcal {O}\). As usual, functions which coincide on a set of measure 0 are considered as equal in these spaces. One has to be cautious with the definition of weighted Sobolev spaces. One would like to define them as spaces of distributions such that derivatives up to a certain order can be represented by functions in \(L_p(\mathcal {O},w;E)\). But for some weights, the elements of \(L_p(\mathcal {O},w;E)\) might not be locally integrable and thus, taking distributional derivatives might not be possible. Hölder's inequality shows that \(L_p(\mathcal {O},w;E)\subset L_1^{loc}(\mathcal {O},E)\) if \(w^{-\frac{1}{p-1}}\in L_1^{loc}(\mathcal {O})\). We refer to [28] for further thoughts in this direction. Let \(\mathcal {O}\subset \mathbb {R}^n\) be a domain, E a Banach space, \(m\in \mathbb {N}_0\), \(p\in [1,\infty )\) and w a weight on \(\mathcal {O}\) such that \(w^{-\frac{1}{p-1}}\in L_1^{loc}(\mathcal {O})\). We define the weighted Sobolev space \(W^m_p(\mathcal {O},w;E)\) by $$\begin{aligned}&W^m_p(\mathcal {O},w;E):=\{f\in L_p(\mathcal {O},w;E)\,|\,\forall \alpha \in \mathbb {N}_0^n,\,|\alpha |\\&\quad \le m : \partial ^\alpha f\in L_p(\mathcal {O},w;E)\} \end{aligned}$$ and endow with the norm \(\Vert f\Vert _{W^m_p(\mathcal {O},w;E)}:=\big (\sum _{|\alpha |\le m} \Vert f\Vert _{L_p(\mathcal {O},w;E)}^p\big )^{1/p}\). 
With the usual modifications, we can also define \(W^m_\infty (\mathcal {O},w;E)\). As usual, we define \(W^m_{p,0}(\mathcal {O},w;E)\) to be the closure of the space of test functions \(\mathscr {D}(\mathcal {O};E)\) in \(W^m_{p}(\mathcal {O},w;E)\). Let E be reflexive, \(w\in A_p\) and \(p,p'\in (1,\infty )\) conjugated Hölder indices, i.e., they satisfy \(1=\frac{1}{p}+\frac{1}{p'}\). Then, we define the dual scale \(W^{-m}_p(\mathcal {O},w;E):=(W^m_{p',0}(\mathcal {O},w^{-\frac{1}{p-1}};E'))'\). We further define weighted Bessel potential, Besov and Triebel–Lizorkin spaces. Since we use the Fourier analytic approach, we already define them as subsets of tempered distributions. Let E be a Banach space, \(s\in \mathbb {R}\), \(p\in [1,\infty ]\) and w a weight on \(\mathbb {R}^n\) such that \(w^{-\frac{1}{p-1}}\in L_1^{loc}(\mathbb {R}^n)\). Then, we define the weighted Bessel potential space \(H^s_p(\mathbb {R}^n,w;E)\) by $$\begin{aligned} H^s_p(\mathbb {R}^n,w;E):=\{f\in \mathscr {S}'(\mathbb {R}^n;E)\,|\,\langle D \rangle ^s f\in L_p(\mathbb {R}^n,w;E)\} \end{aligned}$$ and endow it with the norm \(\Vert f\Vert _{H^s_p(\mathbb {R}^n,w;E)}:=\Vert \langle D\rangle ^s f\Vert _{L_p(\mathbb {R}^n,w;E)}\). Definition 2.10 Let \(\varphi _0\in \mathscr {D}(\mathbb {R}^n)\) be a smooth function with compact support such that \(0\le \varphi _0\le 1\) and $$\begin{aligned} \varphi _0(\xi )=1\quad \text {if } |\xi |\le 1,\qquad \varphi _0(\xi )=0\quad \text {if }|\xi |\ge 3/2. \end{aligned}$$ For \(\xi \in \mathbb {R}^n\) and \(k\in \mathbb {N}\), let further $$\begin{aligned} \varphi (\xi )&:=\varphi _0(\xi )-\varphi _0(2\xi ),\\ \varphi _k(\xi )&:=\varphi (2^{-k}\xi ). \end{aligned}$$ We call such a sequence \((\varphi _k)_{k\in \mathbb {N}_0}\) smooth dyadic resolution of unity. Let E be a Banach space and let \((\varphi _k)_{k\in \mathbb {N}_0}\) be a smooth dyadic resolution of unity. On the space of E-valued tempered distributions \(\mathscr {S}'(\mathbb {R}^n;E)\), we define the sequence of operators \((S_k)_{k\in \mathbb {N}_0}\) by means of $$\begin{aligned} S_kf:=\mathscr {F}^{-1}\varphi _k\mathscr {F} f\quad (f\in \mathscr {S}'(\mathbb {R}^n;E)). \end{aligned}$$ The sequence \((S_k f)_{k\in \mathbb {N}_0}\) is called dyadic decomposition of f. By construction, we have that \(\mathscr {F}(S_k f)\) has compact support so that \(S_k f\) is an analytic function by the Paley–Wiener theorem, see [17, Theorem 2.3.21]. Moreover, it holds that \(\sum _{k\in \mathbb {N}_0} \varphi _k=1\) so that we have \(f=\sum _{k\in \mathbb {N}_0} S_kf\), i.e., f is the limit of a sequence of analytic functions where the limit is taken in the space of tempered distributions. Elements of Besov and Triebel–Lizorkin spaces even have convergence in a stronger sense, as their definition shows: Let \((\varphi _k)_{k\in \mathbb {N}_0}\) be a smooth dyadic resolution of unity. Let further E be a Banach space, w a weight, \(s\in \mathbb {R}\), \(p\in [1,\infty )\) and \(q\in [1,\infty ]\). We define the weighted Besov space \(B^s_{p,q}(\mathbb {R}^n,w;E)\) by $$\begin{aligned} B^s_{p,q}(\mathbb {R}^n,w;E):=\{f\in \mathscr {S}'(\mathbb {R}^n,E): \Vert f\Vert _{B^s_{p,q}(\mathbb {R}^n,w;E)}<\infty \} \end{aligned}$$ $$\begin{aligned} \Vert f\Vert _{B^s_{p,q}(\mathbb {R}^n,w;E)}:=\Vert (2^{sk}\mathscr {F}^{-1}\varphi _k\mathscr {F}f)_{k\in \mathbb {N}_0}\Vert _{\ell ^q(L_p(\mathbb {R}^n,w;E))}. 
\end{aligned}$$ We define the weighted Triebel–Lizorkin space \(F^s_{p,q}(\mathbb {R}^n,w;E)\) by $$\begin{aligned} F^s_{p,q}(\mathbb {R}^n,w;E):=\{f\in \mathscr {S}'(\mathbb {R}^n;E): \Vert f\Vert _{F^s_{p,q}(\mathbb {R}^n,w;E)}<\infty \} \end{aligned}$$ $$\begin{aligned} \Vert f\Vert _{F^s_{p,q}(\mathbb {R}^n,w;E)}:=\Vert (2^{sk}\mathscr {F}^{-1}\varphi _k\mathscr {F}f)_{k\in \mathbb {N}_0}\Vert _{L_p(\mathbb {R}^n,w;\ell ^q(E)))}. \end{aligned}$$ It is well known, that these spaces do not depend on the choice of the dyadic resolution of unity if w is an \(A_{\infty }\)-weight. In this case, different choices lead to equivalent norms, see for example [37, Proposition 3.4]. In fact, the condition on the weight can be weakened: In [41], it was shown that one also obtains the independence of the dyadic resolution of unity in the case of so-called \(A_{\infty }^{loc}\) weights. Let E be a reflexive Banach space, \(w\in [A_{\infty }]_p'\), \(s\in \mathbb {R}\) and \(p,q\in (1,\infty )\). We define the dual scales of Besov and Triebel–Lizorkin scale by $$\begin{aligned}&\mathcal {B}^{s}_{p,q}(\mathbb {R}^n,w;E):=(B^{-s}_{p',q'}(\mathbb {R}^n,w^{-\frac{1}{p-1}};E'))',\\&\quad \mathcal {F}^{s}_{p,q}(\mathbb {R}^n,w;E):=(F^{-s}_{p',q'}(\mathbb {R}^n,w^{-\frac{1}{p-1}};E'))', \end{aligned}$$ where \(p',q'\) denote the conjugated Hölder indices. Remark 2.13 The main reason for us to include the dual scales in our considerations is the following: If w is additionally an admissible weight in the sense of [42, Section 1.4.1.], then we have \(\mathcal {B}^{s}_{p,q}(\mathbb {R}^n,w)=B^{s}_{p,q}(\mathbb {R}^n,w)\) and \(\mathcal {F}^{s}_{p,q}(\mathbb {R}^n,w)=F^{s}_{p,q}(\mathbb {R}^n,w)\). Therefore, we can also treat weighted Besov and Triebel–Lizorkin spaces with weights that are outside the \(A_{\infty }\) range. Formulating this in terms of dual scales allows us to transfer Fourier multiplier theorems without any additional effort just by duality. The main example we have in mind will be \(w(x)=\langle x\rangle ^{d}\) with arbitrary \(d\in \mathbb {R}\). We will make use of this in a forthcoming paper on equations with boundary noise. Proposition 2.14 Recall that Assumption 1.2 holds true and suppose that E has cotype \(q_E\in [2,\infty )\). Let further \((S,\Sigma ,\mu )\) be a \(\sigma \)-finite measure space, \((\varepsilon _k)_{k\in \mathbb {N}}\) a Rademacher sequence on the probability space \((\Omega ,\mathcal {F},\mathbb {P})\), \(s\in \mathbb {R}\) and \(p_0,q_0\in (1,\infty )\). Consider one of the following cases: \(\mathscr {A}^\bullet \) stands for the Bessel potential scale and \(p\in (\max \{q_E,p_0\},\infty )\). Moreover, we allow \(p=\max \{q_E,p_0\}\) if \(q_E<p_0\) or if E is a Hilbert space and \(p_0=2\). \(\mathscr {A}^\bullet \) stands for the Besov scale and \(p\in (\max \{q_E,p_0,q_0\},\infty )\). Moreover, we allow \(p=\max \{q_E,p_0,q_0\}\) if \(q_E< p_0\le q_0\) or if E is a Hilbert space and \(p_0=q_0=2\). \(\mathscr {A}^\bullet \) stands for the Triebel–Lizorkin scale and \(p\in (\max \{q_E,p_0,q_0\},\infty )\). Moreover, we allow \(p=\max \{q_E,p_0,q_0\}\) if \(q_E< q_0\le p_0\) or if E is a Hilbert space and \(p_0=q_0=2\). Then, the images of balls with finite radius in \(L_p(S)\) under the embedding $$\begin{aligned} L_p(S)\hookrightarrow \mathcal {B}(\mathscr {A}^s,L_p(S;\mathscr {A}^s)), f\mapsto f\otimes (\,\cdot \,) \end{aligned}$$ are \(\mathcal {R}\)-bounded. 
More precisely, there is a constant \(C>0\) such that for all \(N\in \mathbb {N}\), \(g_1,\ldots ,g_N\in \mathscr {A}^s\) and all \(f_1,\ldots ,f_N\in L_p(S)\) we have the estimate $$\begin{aligned} \left\| \sum _{k=1}^N \varepsilon _k f_k \otimes g_k\right\| _{L_p(\Omega ;L_p(S;\mathscr {A}^s))}\le C\sup _{k=1,\ldots ,n}\Vert f\Vert _{L_p(S)}\left\| \sum _{k=1}^N \varepsilon _k g_k\right\| _{L_p(\Omega ;\mathscr {A}^s)}. \end{aligned}$$ The cases \(p\in (\max \{q_E,p_0\},\infty )\) in the Bessel potential case and \(p\in (\max \{q_E,p_0,q_0\},\infty )\) in the Besov and Triebel–Lizorkin case follow from the result by Hytönen and Veraar, Proposition 2.1, as in these cases \(\mathscr {A}^s\) has cotype \(\max \{q_E,p_0\}\) and \(\max \{q_E,p_0,q_0\}\), respectively, see for example [24, Proposition 7.1.4]. The Hilbert space cases follow directly since uniform boundedness and \(\mathcal {R}\)-boundedness coincide. The other cases in which \(p=\max \{q_E,p_0\}\) or \(p=\max \{q_E,p_0,q_0\}\) are allowed follow by Fubini's theorem together with the Kahane–Khintchine inequalities as in [25, Remark 3.4]. \(\square \) For the mapping properties, we derive later on, it is essential that we can use Mikhlin's multiplier theorem. There are many versions of this theorem available. For our purposes, the following will be sufficient. Theorem 2.15 Let E be a UMD space, \(p\in (1,\infty )\), \(s\in \mathbb {R}\) and let w be an \(A_p\) weight. Let \(m\in C^n(\mathbb {R}^n{\setminus }\{0\};\mathcal {B}(E))\) such that $$\begin{aligned} \kappa _m:=\mathcal {R}\big (\{|\xi |^{|\alpha |}D^{\alpha }m(\xi ):\xi \in \mathbb {R}^n{\setminus }\{0\},|\alpha |\le n\}\big )<\infty . \end{aligned}$$ Then, we have that $$\begin{aligned} \Vert \mathscr {F}^{-1} m \mathscr {F} \Vert _{\mathcal {B}(H^s_p(\mathbb {R}^n,w;E))}\le C\kappa _m \end{aligned}$$ with a constant \(C>0\) only depending on n, p and E. Suppose that E is a UMD space with Pisier's property \((\alpha )\). Let \(p\in (1,\infty )\) and \(w\in A_p(\mathbb {R}^n)\). Let further \(\mathcal {T}\subset C^{n}(\mathbb {R}^n{\setminus }\{0\},\mathcal {B}(E))\). Then, there is a constant \(C>0\) independent of \(\mathcal {T}\) such that $$\begin{aligned} \mathcal {R}_{\mathcal {B}(H^s_p(\mathbb {R}^n,w;E))}(\{\mathscr {F}^{-1}m\mathscr {F}:m\in \mathcal {T}\})\le C\kappa _{\mathcal {T}} \end{aligned}$$ $$\begin{aligned} \kappa _{\mathcal {T}}:=\mathcal {R}_{\mathcal {B}(E)}(\{|\xi |^{|\alpha |} D^{\alpha }m(\xi ):\xi \in \mathbb {R}^n{\setminus }\{0\},\alpha \in \mathbb {N}_0^n,|\alpha |\le n, m\in \mathcal {T}\}). \end{aligned}$$ Let E be a Banach space, \(p\in (1,\infty )\), \(q\in [1,\infty ]\) and \(s\in \mathbb {R}\). Let further w be an \(A_{\infty }\) weight, \(m\in C^{\infty }(\mathbb {R}^n,\mathcal {B}(E))\) and \(\mathscr {A}^s_{p,q}(\mathbb {R}^n,w;E)\in \{B^s_{p,q}(\mathbb {R}^n,w;E),F^s_{p,q}(\mathbb {R}^n,w;E)\}\). Then, there is a natural number \(N\in \mathbb {N}\) and a constant \(C>0\) not depending on m such that $$\begin{aligned} \Vert \mathscr {F}^{-1} m \mathscr {F} \Vert _{\mathcal {B}(\mathscr {A}^s_{p,q}(\mathbb {R}^n,w;E))}\le C\kappa _m \end{aligned}$$ $$\begin{aligned} \kappa _m:=\sup _{|\alpha |\le N}\sup _{\xi \in \mathbb {R}^n} \Vert \langle \xi \rangle ^{|\alpha |}D^{\alpha }m(\xi )\Vert _{\mathcal {B}(E)}. 
\end{aligned}$$ The same holds if E is reflexive, \(p,q\in (1,\infty )\), \(w\in [A_\infty ]_p'\) and \(\mathscr {A}^s_{p,q}(\mathbb {R}^n,w;E)\in \{\mathcal {B}^s_{p,q}(\mathbb {R}^n,w;E),\mathcal {F}^s_{p,q}(\mathbb {R}^n,w;E)\}\). Let E be a Banach space, \(p\in (1,\infty )\), \(q\in [1,\infty ]\) and \(s\in \mathbb {R}\). Let further w be an \(A_{\infty }\) weight, \(\mathcal {T}\subset C^{\infty }(\mathbb {R}^n,\mathcal {B}(E))\) and \(\mathscr {A}^s_{p,q}(\mathbb {R}^n,w;E)\in \{B^s_{p,q}(\mathbb {R}^n,w;E),F^s_{p,q}(\mathbb {R}^n,w;E)\}\). Then, there is an \(N\in \mathbb {N}\) and a constant \(C>0\) independent of \(\mathcal {T}\) such that $$\begin{aligned} \mathcal {R}(\{\mathscr {F}^{-1} m \mathscr {F} : m\in \mathcal {T}\}\le C \kappa _{\mathcal {T}} \end{aligned}$$ $$\begin{aligned} \kappa _{\mathcal {T}}:=\sup _{|\alpha |\le N}\mathcal {R}(\{\langle \xi \rangle ^{|\alpha |}D^{\alpha }m(\xi ):\xi \in \mathbb {R}^n,m\in \mathcal {T}\}). \end{aligned}$$ The same holds if E is a UMD space, \(p,q\in (1,\infty )\), \(w\in [A_\infty ]_p'\) and \(\mathscr {A}^s_{p,q}(\mathbb {R}^n,w;E)\in \{\mathcal {B}^s_{p,q}(\mathbb {R}^n,w;E),\mathcal {F}^s_{p,q}(\mathbb {R}^n,w;E)\}\). Part (a) with \(s=0\) is contained in [15, Theorem 1.2]. The general case \(s\in \mathbb {R}\) follows from \(s=0\) by decomposing \(m(\xi )=\langle \xi \rangle ^{-s}m(\xi )\langle \xi \rangle ^{s}\) and by using the definition of Bessel potential spaces. Part (b) can be derived as [29, 5.2 (b)]. The scalar-valued, unweighted version of part (c) is contained in [45, Paragraph 2.3.7]. But the proof therein can be transferred to our situation by using [37, Proposition 2.4]. Part (d) is the isotropic version of [22, Lemma 2.4]. The statements concerning the dual scales follow by duality. In the \(\mathcal {R}\)-bounded case, we refer the reader to [24, Proposition 8.4.1]. \(\square \) For the dual scales in Theorem 2.15(d), it is actually not necessary to assume that E is a UMD space. Instead [24, Proposition 8.4.1] shows that K-convexity is good enough. But since we did not introduce K-convexity, we only stated the less general version here. Later on, we sometimes want to apply Mikhlin's theorem for m taking values in \(\mathcal {B}(E^N,E^M)\) with certain \(N,M\in \mathbb {N}\) instead of \(\mathcal {B}(E)\). Note however that we can identify \(\mathcal {B}(E^N,E^M)\simeq \mathcal {B}(E)^{M\times N}\). Hence, one can apply Mikhlin's theorem for each component and the statements of Theorem 2.15 transfer to the case in which m takes values in \(\mathcal {B}(E^N,E^M)\). Later on, we will also use parameter-dependent versions of our function spaces. They are natural to work with in the context of the parameter-dependent Boutet de Monvel calculus. And since we use elements of this parameter-dependent calculus, these spaces are also useful in our setting. Recall that Assumption 1.2 holds true. Let \(\mu \in \mathbb {C}\) and \(s,s_0\in \mathbb {R}\). Then, we define the parameter-dependent weighted spaces $$\begin{aligned}&\mathscr {A}^{s,\mu ,s_0}(\mathbb {R}^n,w;E):=\langle D,\mu \rangle ^{s_0-s} \mathscr {A}^{s_0}(\mathbb {R}^n,w;E),\\&\quad \Vert \cdot \Vert _{\mathscr {A}^{s,\mu ,s_0}(\mathbb {R}^n,w;E)}:=\Vert \langle D,\mu \rangle ^{s-s_0}\cdot \Vert _{\mathscr {A}^{s_0}(\mathbb {R}^n,w;E)}, \end{aligned}$$ where \(\langle D,\mu \rangle :=\mathscr {F}^{-1}\langle \xi ,\mu \rangle \mathscr {F}=\mathscr {F}^{-1}(1+|\xi |^2+|\mu |^2)^{1/2}\mathscr {F}\). 
Lemma 2.19 Let \(\mu \in \mathbb {C}\) and \(s,s_0\in \mathbb {R}\). We have the estimates $$\begin{aligned} \Vert \cdot \Vert _{\mathscr {A}^{s,\mu ,s_0}}\eqsim \Vert \cdot \Vert _{\mathscr {A}^{s}}+\langle \mu \rangle ^{s-s_0} \Vert \cdot \Vert _{\mathscr {A}^{s_0}}\qquad&\text {if }\quad s-s_0\ge 0,\\ \Vert \cdot \Vert _{\mathscr {A}^{s,\mu ,s_0}} \lesssim \Vert \cdot \Vert _{\mathscr {A}^{s}} \lesssim \langle \mu \rangle ^{s_0-s}\Vert \cdot \Vert _{\mathscr {A}^{s,\mu ,s_0}}\qquad&\text {if }\quad s-s_0\le 0. \end{aligned}$$ Assumption 1.2 is formulated in a way such that we can apply our versions of the Mikhlin multiplier theorem, Theorem 2.15(a) and (c). Let first \(s\ge s_0\). Note that the function $$\begin{aligned} m:\mathbb {R}^n\times \mathbb {C}\rightarrow \mathbb {R},\,(\xi ,\mu )\mapsto \frac{\langle \xi ,\mu \rangle ^{s-s_0}}{\langle \xi \rangle ^{s-s_0}+\langle \mu \rangle ^{s-s_0}} \end{aligned}$$ satisfies the condition from Theorem 2.15 uniformly in \(\mu \). Indeed, by induction it follows that \(\partial ^{\alpha }_{\xi }m(\xi ,\mu )\) \((\alpha \in \mathbb {N}_0^n)\) is a linear combination of terms of the form $$\begin{aligned} p_{\beta ,i,j,k}(\xi ,\mu )=\xi ^\beta \langle \xi ,\mu \rangle ^{s-s_0-i}\langle \xi \rangle ^{(s-s_0-2)j-k}(\langle \xi \rangle ^{s-s_0}+\langle \mu \rangle ^{s-s_0})^{-1-j} \end{aligned}$$ for some \(\beta \in \mathbb {N}_0^n\) and \(i,k,j\in \mathbb {N}_0\) such that \(|\alpha |=i+2j+k-|\beta |\). But each of these terms satisfies $$\begin{aligned} \langle \xi \rangle ^{|\alpha |}|p_{\beta ,i,j,k}(\xi ,\mu )|&=m(\xi ,\mu )\langle \xi \rangle ^{|\alpha |}|\xi ^\beta |\langle \xi ,\mu \rangle ^{-i}\langle \xi \rangle ^{(s-s_0-2)j-k}(\langle \xi \rangle ^{s-s_0}+\langle \mu \rangle ^{s-s_0})^{-j}\\&\lesssim m(\xi ,\mu )\langle \xi \rangle ^{|\alpha |+|\beta |-i-2j-k+(s-s_0)j-(s-s_0)j}\\&\lesssim m(\xi ,\mu )\\&\lesssim 1. \end{aligned}$$ Hence, \((m(\cdot ,\mu ))_{\mu \in \mathbb {C}}\) is a bounded family of Fourier multipliers. 
Using this, we obtain $$\begin{aligned} \Vert u\Vert _{\mathscr {A}^{s,\mu ,s_0}}&=\Vert \langle D,\mu \rangle ^{s-s_0}u\Vert _{\mathscr {A}^{s_0}}=\big \Vert m(D,\mu )(\langle D\rangle ^{s-s_0}+\langle \mu \rangle ^{s-s_0})u\big \Vert _{\mathscr {A}^{s_0}}\\&\lesssim \big \Vert (\langle D\rangle ^{s-s_0}+\langle \mu \rangle ^{s-s_0})u\big \Vert _{\mathscr {A}^{s_0}}\\&\le \Vert u\Vert _{\mathscr {A}^{s}}+\langle \mu \rangle ^{s-s_0}\Vert u\Vert _{\mathscr {A}^{s_0}}=\bigg \Vert \frac{\langle D\rangle ^{s-s_0}}{\langle D,\mu \rangle ^{s-s_0}}\langle D,\mu \rangle ^{s-s_0}u\bigg \Vert _{\mathscr {A}^{s_0}}\\&\quad +\bigg \Vert \frac{\langle \mu \rangle ^{s-s_0}}{\langle D,\mu \rangle ^{s-s_0}}\langle D,\mu \rangle ^{s-s_0}u\bigg \Vert _{\mathscr {A}^{s_0}}\\&\lesssim \Vert \langle D,\mu \rangle ^{s-s_0}u\Vert _{\mathscr {A}^{s_0}}= \Vert u\Vert _{\mathscr {A}^{s,\mu ,s_0}} \end{aligned}$$ for \(s-s_0\ge 0\) and $$\begin{aligned} \Vert u\Vert _{\mathscr {A}^{s,\mu ,s_0}}&=\Vert \langle D,\mu \rangle ^{s-s_0}u\Vert _{\mathscr {A}^{s_0}}=\bigg \Vert \frac{\langle D,\mu \rangle ^{s-s_0}}{\langle D\rangle ^{s-s_0}}\langle D\rangle ^{s-s_0}u\bigg \Vert _{\mathscr {A}^{s_0}}\lesssim \Vert \langle D\rangle ^{s-s_0}u\Vert _{\mathscr {A}^{s_0}}\\&=\Vert u\Vert _{\mathscr {A}^{s}}=\bigg \Vert \frac{\langle D\rangle ^{s-s_0}}{\langle D,\mu \rangle ^{s-s_0}}\langle D,\mu \rangle ^{s-s_0}u\bigg \Vert _{\mathscr {A}^{s_0}}\lesssim \langle \mu \rangle ^{s_0-s}\Vert u\Vert _{\mathscr {A}^{s,\mu ,s_0}} \end{aligned}$$ for \(s-s_0\le 0\). \(\square \) In this paper, we also consider function spaces on open intervals I. In this case, we can just define them by restriction. Let \(I\subset \mathbb {R}\) be an open interval. Then, we define the space \((\mathscr {A}^{\bullet }(I,w;E),\Vert \cdot \Vert _{\mathscr {A}^{\bullet }(I,w;E)})\) by $$\begin{aligned}&\mathscr {A}^{\bullet }(I,w;E)=\{f|_I:f\in \mathscr {A}^{\bullet }(\mathbb {R},w;E)\},\\&\quad \Vert g\Vert _{\mathscr {A}^{\bullet }(I,w;E)}:=\inf _{f\in \mathscr {A}^{\bullet }(\mathbb {R},w;E), f|_I=g} \Vert f\Vert _{\mathscr {A}^{\bullet }(\mathbb {R},w;E)}. \end{aligned}$$ We use the same definition for \(\mathscr {B}^{\bullet }\) and \(\mathscr {C}^{\bullet }\). Recall that we defined \(W^{-m}_p(\mathcal {O},w;E)\) as the dual of \(W^m_{p'}(\mathcal {O},w^{-\frac{1}{p-1}};E')\) and not by restriction. In the scalar-valued unweighted setting both definitions coincide, see [44, Section 2.10.2]. We believe that the same should hold true under suitable assumptions in the weighted vector-valued setting. But since this is not important for this work, we do not investigate this any further. Let \(s\in \mathbb {R}\), \(p\in (1,\infty )\), \(r\in (-1,p-1)\) and \(l\in \mathbb {N}\). Suppose that \(\mathscr {A}^{s}\) is reflexive. Then, we have the continuous embedding $$\begin{aligned} L_p(\mathbb {R}_+,|{\text {pr}}_n|^{r+lp};&\mathscr {A}^s)\hookrightarrow W^{-l}_{p}(\mathbb {R}_+,|{\text {pr}}_n|^{r}; \mathscr {A}^s). \end{aligned}$$ We should note that almost the same proof was given in [30]. By duality, it suffices to prove $$\begin{aligned} W^{l}_{p',0}(\mathbb {R}_+,|{\text {pr}}_n|^{r'};(\mathscr {A}^{s})')\hookrightarrow L_{p'}(\mathbb {R}_+,|{\text {pr}}_n|^{r'-lp'};(\mathscr {A}^{s})') \end{aligned}$$ where \(r'=-\frac{r}{p-1}\) and \(p'=\frac{p}{p-1}\). But this is a special case of [32, Corollary 3.4]. \(\square \)
Pseudo-differential operators in mixed scales
Now we briefly introduce some notions and notations concerning pseudo-differential operators.
Since we only use the x-independent case in the following, we could also formulate our results in terms of Fourier multipliers. However, parameter-dependent Hörmander symbol classes provide a suitable framework for the formulation of our results. In the case of parameter-dependent symbols, we oftentimes consider spaces of smooth functions on an open set \(U\subset \mathbb {R}^n\times \mathbb {C}\). In this case, we identify \(\mathbb {C}\simeq \mathbb {R}^2\) and understand the differentiability in the real sense. If we want to understand it in the complex sense, we say holomorphic instead of smooth. Let Z be a Banach space, \(d\in \mathbb {R}\), \(\Sigma \subset \mathbb {C}\) open and \(\vartheta :\Sigma \rightarrow (0,\infty )\) a function. The space of parameter-independent Hörmander symbols \(S^d(\mathbb {R}^n;Z)\) of order d is the space of all smooth functions \(p\in C^{\infty }(\mathbb {R}^n;Z)\) such that $$\begin{aligned} \Vert p\Vert ^{(d)}_k:=\sup _{\xi \in \mathbb {R}^n, \atop \alpha \in \mathbb {N}_0^n, |\alpha |\le k} \langle \xi \rangle ^{-(d-|\alpha |)} \Vert D^{\alpha }_{\xi }p(\xi )\Vert _{Z}<\infty \end{aligned}$$ for all \(k\in \mathbb {N}_0\). The space of parameter-dependent Hörmander symbols \(S^{d,\vartheta }(\mathbb {R}^n\times \Sigma ;Z)\) of order d is the space of all smooth functions \(p\in C^{\infty }(\mathbb {R}^n\times \Sigma ;Z)\) such that $$\begin{aligned} \Vert p\Vert ^{(d,\vartheta )}_k:=\sup _{\alpha \in \mathbb {N}_0^n,\,\gamma \in \mathbb {N}_0^2\atop |\alpha |+|\gamma |\le k}\sup _{\xi \in \mathbb {R}^n,\mu \in \Sigma } \vartheta (\mu )^{-1}\langle \xi ,\mu \rangle ^{-(d-|\alpha |_1-|\gamma |_1)} \Vert D^{\alpha }_{\xi }D_{\mu }^{\gamma }p(\xi ,\mu )\Vert _{Z}<\infty \end{aligned}$$ for all \(k\in \mathbb {N}_0\). If \(\vartheta =1\), then we also omit it in the notation. Actually, if one omits the weight function \(\vartheta \), then the latter symbol class is the special case of parameter-dependent Hörmander symbols with regularity \(\infty \). Usually, one also includes the regularity parameter \(\nu \) in the notation of the symbol class, so that the notation \(S^{d,\infty }(\mathbb {R}^n\times \Sigma ;Z)\) is more common in the literature. But since the symbols in this paper always have infinite regularity, we omit \(\infty \) in the notation. For the Bessel potential case, \(\mathcal {R}\)-bounded versions of these symbol classes are useful. Let E be a Banach space, \(N,M\in \mathbb {N}\), \(d\in \mathbb {R}\), \(\Sigma \subset \mathbb {C}\) open and \(\vartheta :\Sigma \rightarrow (0,\infty )\) a function. 
By \(S^d_{\mathcal {R}}(\mathbb {R}^n;\mathcal {B}(E^N,E^M))\), we denote the space of all smooth functions \(p\in C^{\infty }(\mathbb {R}^n;\mathcal {B}(E^N,E^M))\) such that $$\begin{aligned} \Vert p\Vert ^{(d)}_{k,\mathcal {R}}:=\mathcal {R}\big \{\langle \xi \rangle ^{-(d-|\alpha |_1)}D^{\alpha }_{\xi }p(\xi ):\xi \in \mathbb {R}^n, \alpha \in \mathbb {N}_0^n, |\alpha |\le k \big \}<\infty \end{aligned}$$ By \(S^{d,\vartheta }_{\mathcal {R}}(\mathbb {R}^n\times \Sigma ;\mathcal {B}(E^N,E^M))\), we denote the space of all smooth functions \(p\in C^{\infty }(\mathbb {R}^n\times \Sigma ;\mathcal {B}(E^N,E^M))\) such that $$\begin{aligned}&\Vert p\Vert ^{(d,\vartheta )}_{k,\mathcal {R}}:=\mathcal {R}\big \{\vartheta (\mu )^{-1}\langle \xi ,\mu \rangle ^{-(d-|\alpha |-|\gamma |)} D^{\alpha }_{\xi }D_{\mu }^{\gamma }p(\xi ,\mu ):\\&\xi \in \mathbb {R}^n,\mu \in \Sigma ,\alpha \in \mathbb {N}_0^n,\gamma \in \mathbb {N}_0^2,|\alpha |+|\gamma |\le k\big \}<\infty \end{aligned}$$ It seems like \(\mathcal {R}\)-bounded versions of the usual Hörmander symbol classes have first been considered in the Ph.D. thesis of Štrkalj. We also refer to [40]. It was observed in [10] that also the \(\mathcal {R}\)-bounded symbol classes are Fréchet spaces. Since uniform bounds can be estimated by \(\mathcal {R}\)-bounds, we have the continuous embeddings $$\begin{aligned}&S^d_{\mathcal {R}}(\mathbb {R}^n;\mathcal {B}(E^N,E^M))\hookrightarrow S^d(\mathbb {R}^n;\mathcal {B}(E^N,E^M)),\\&\quad S^{d,\vartheta }_{\mathcal {R}}(\mathbb {R}^n\times \Sigma ;\mathcal {B}(E^N,E^M))\hookrightarrow S^{d,\vartheta }(\mathbb {R}^n\times \Sigma ;\mathcal {B}(E^N,E^M)). \end{aligned}$$ Since uniform boundedness and \(\mathcal {R}\)-boundedness for a set of scalars are equivalent, we have that $$\begin{aligned} S^d(\mathbb {R}^n;\mathbb {C})\hookrightarrow S^d_{\mathcal {R}}(\mathbb {R}^n;\mathcal {B}(E^N)),\quad S^{d,\vartheta }(\mathbb {R}^n\times \Sigma ;\mathbb {C})\hookrightarrow S^{d,\vartheta }_{\mathcal {R}}(\mathbb {R}^n\times \Sigma ;\mathcal {B}(E^N)). \end{aligned}$$ Given \(d_1,d_2\in \mathbb {R}\), \(\vartheta _1,\vartheta _2:\Sigma \rightarrow (0,\infty )\) and \(N_1,N_2,N_3\in \mathbb {N}\) we have the continuous bilinear mappings $$\begin{aligned}&S^{d_2}(\mathbb {R}^n;\mathcal {B}(E^{N_2},E^{N_3}))\times S^{d_1}(\mathbb {R}^n;\mathcal {B}(E^{N_1},E^{N_2}))\\&\rightarrow S^{d_1+d_2}(\mathbb {R}^n;\mathcal {B}(E^{N_1},E^{N_3})),\,(p_2,p_1)\mapsto p_2p_1,\\&S^{d_2,\vartheta _2}(\mathbb {R}^n\times \Sigma ;\mathcal {B}(E^{N_2},E^{N_3}))\times S^{d_1,\vartheta _1}(\mathbb {R}^n\times \Sigma ;\mathcal {B}(E^{N_1},E^{N_2}))\\&\qquad \rightarrow S^{d_1+d_2,\vartheta _1\cdot \vartheta _2}(\mathbb {R}^n\times \Sigma ;\mathcal {B}(E^{N_1},E^{N_3})),\,(p_2,p_1)\mapsto p_2p_1. \end{aligned}$$ The same properties also hold for the \(\mathcal {R}\)-bounded versions. The differential operator \(\partial ^{\alpha }\) with \(\alpha \in \mathbb {N}_0^n\) is a continuous linear operator $$\begin{aligned} S^{d}(\mathbb {R}^n;\mathcal {B}(E^{N},E^{M}))\rightarrow S^{d-|\alpha |}(\mathbb {R}^n;\mathcal {B}(E^{N},E^{M})),\,p\mapsto \partial ^{\alpha }p,\\ S^{d,\vartheta }(\mathbb {R}^n\times \Sigma ;\mathcal {B}(E^{N},E^{M}))\rightarrow S^{d-|\alpha |,\vartheta }(\mathbb {R}^n\times \Sigma ;\mathcal {B}(E^{N},E^{M})),\,p\mapsto \partial ^{\alpha }p. 
\end{aligned}$$ One could also view parameter-independent symbol classes as a subset of parameter-dependent symbol classes with bounded \(\Sigma \subset \mathbb {C}\) which consists of those symbols which do not depend on the parameter \(\mu \). Hence, the statements we formulate for parameter-dependent symbol classes in the following also hold in a similar way in the parameter-independent case. Let E be a Banach space, \(d\in \mathbb {R}\) and \(\Sigma \subset \mathbb {C}\) open. Let further \(p\in S^d(\mathbb {R}^n\times \Sigma ;\mathcal {B}(E^{N},E^{M}))\). Then, we define the corresponding pseudo-differential operator by $$\begin{aligned} (P_{\mu }f)(x):= & {} ({\text{ op }}[p(\,\cdot \,,\mu )]f)(x)\\:= & {} [\mathscr {F}^{-1}p(\,\cdot \,,\mu )\mathscr {F}f](x)=\frac{1}{(2\pi )^{n/2}}\int _{\mathbb {R}^n}e^{ix\xi }p(\xi ,\mu )\widehat{f}(\xi )\,d\xi \end{aligned}$$ for \(f\in \mathscr {S}(\mathbb {R}^n;E^N)\). Since we only consider x-independent symbols, the mapping properties of such pseudo-differential operators are an easy consequence of Mikhlin's theorem. Proposition 3.5 Let \(N,M\in \mathbb {N}\), \(s,s_0,d\in \mathbb {R}\), \(\Sigma \subset \mathbb {C}\) open and \(\vartheta :\Sigma \rightarrow (0,\infty )\) a function. Consider one of the following two cases: (i) \(\mathscr {A}^{\bullet }\) belongs to the Bessel potential scale and \(S^{d,\vartheta }_{\mathscr {A}}(\mathbb {R}^{n}\times \Sigma ;\mathcal {B}(E^N,E^M))=S^{d,\vartheta }_{\mathcal {R}}(\mathbb {R}^{n}\times \Sigma ;\mathcal {B}(E^N,E^M))\); (ii) \(\mathscr {A}^{\bullet }\) belongs to the Besov or the Triebel–Lizorkin scale and \(S^{d,\vartheta }_{\mathscr {A}}(\mathbb {R}^{n}\times \Sigma ;\mathcal {B}(E^N,E^M))=S^{d,\vartheta }(\mathbb {R}^{n}\times \Sigma ;\mathcal {B}(E^N,E^M))\). Then, the mapping $$\begin{aligned}&S^{d,\vartheta }_{\mathscr {A}}(\mathbb {R}^{n}\times \Sigma ;\mathcal {B}(E^N,E^M))\times \mathscr {A}^{s+d,\mu ,s_0}(\mathbb {R}^{n},w_0,E^N)\\&\rightarrow \mathscr {A}^{s,\mu ,s_0}(\mathbb {R}^{n},w_0,E^M),\; (p,f)\mapsto {\text {op}}[p(\,\cdot \,,\mu )]f \end{aligned}$$ defined by extension from \(\mathscr {S}(\mathbb {R}^{n},E^N)\) to \(\mathscr {A}^{s+d,\mu ,s_0}(\mathbb {R}^{n},w_0,E^N)\) is bilinear and continuous. Moreover, there is a constant \(C>0\) independent of \(\vartheta \) such that $$\begin{aligned} \Vert {\text {op}}[p(\,\cdot \,,\mu )]f\Vert _{ \mathscr {A}^{s,\mu ,s_0}(\mathbb {R}^{n},w_0,E^M)} \le C \vartheta (\mu )\Vert p\Vert ^{(d,\vartheta )}_N\Vert f\Vert _{ \mathscr {A}^{s+d,\mu ,s_0}(\mathbb {R}^{n},w_0,E^N)} \end{aligned}$$ for all \(\mu \in \Sigma \). It is obvious that the mapping is bilinear. For the continuity, we note that $$\begin{aligned}&S^{d,\vartheta }_{\mathscr {A}}(\mathbb {R}^n\times \Sigma ;\mathcal {B}(E^N,E^M))\\&\rightarrow S^{0,\vartheta }_{\mathscr {A}}(\mathbb {R}^n\times \Sigma ;\mathcal {B}(E^N,E^M)), p\mapsto [(\xi ,\mu )\mapsto p(\xi ,\mu )\langle \xi ,\mu \rangle ^{-d}] \end{aligned}$$ is continuous.
Hence, by Mikhlin's theorem there is an \(N'\in \mathbb {N}\) such that $$\begin{aligned} \Vert {\text {op}}[p(\,\cdot \,,\mu )]f\Vert _{ \mathscr {A}^{s,\mu ,s_0}(\mathbb {R}^{n},w_0,E^M)}&=\Vert {\text {op}}[\langle \cdot ,\mu \rangle ^{s_0-s}]\\&{\text {op}}[p(\,\cdot \,,\mu )\langle \cdot ,\mu \rangle ^{-d}]{\text {op}}[\langle \cdot ,\mu \rangle ^{d+s-s_0}]f\Vert _{ \mathscr {A}^{s,\mu ,s_0}(\mathbb {R}^{n},w_0,E^M)}\\&= \Vert {\text {op}}[p(\,\cdot \,,\mu )\langle \cdot ,\mu \rangle ^{-d}]{\text {op}}[\langle \cdot ,\mu \rangle ^{d+s-s_0}]f\Vert _{ \mathscr {A}^{s_0}(\mathbb {R}^{n},w_0,E^M)}\\&\lesssim \vartheta (\mu )\Vert p\Vert ^{(d,\vartheta )}_{N',\mathscr {A}}\Vert {\text {op}}[\langle \cdot ,\mu \rangle ^{d+s-s_0}]f\Vert _{ \mathscr {A}^{s_0}(\mathbb {R}^{n},w_0,E^N)}\\&= \vartheta (\mu )\Vert p\Vert ^{(d,\vartheta )}_{N',\mathscr {A}}\Vert f\Vert _{ \mathscr {A}^{s+d,\mu ,s_0}(\mathbb {R}^{n},w_0,E^N)}. \end{aligned}$$ This also shows the asserted estimate. \(\square \) We can also formulate an \(\mathcal {R}\)-bounded version of Proposition 3.5 without the parameter-dependence of the function spaces. Proposition 3.6 Let \(N,M\in \mathbb {N}\), \(s,d\in \mathbb {R}\), \(\Sigma \subset \mathbb {C}\) open and \(\vartheta :\Sigma \rightarrow (0,\infty )\) a function. Consider one of the following two cases: (i) \(\mathscr {A}^{\bullet }\) belongs to the Bessel potential scale and E satisfies Pisier's property \((\alpha )\) in addition to Assumption 1.2; (ii) \(\mathscr {A}^{\bullet }\) belongs to the Besov or the Triebel–Lizorkin scale. Then, the mapping $$\begin{aligned}&S^{d,\vartheta }_{\mathcal {R}}(\mathbb {R}^{n}\times \Sigma ;\mathcal {B}(E^N,E^M))\times \mathscr {A}^{s+d}(\mathbb {R}^{n},w_0;E^N)\\&\rightarrow \mathscr {A}^{s}(\mathbb {R}^{n},w_0;E^M),\; (p,f)\mapsto {\text {op}}[p(\,\cdot \,,\mu )]f \end{aligned}$$ defined by extension from \(\mathscr {S}(\mathbb {R}^n,E^N)\) to \(\mathscr {A}^{s+d}(\mathbb {R}^{n},w_0;E^N)\) is bilinear and continuous. Moreover, there is a constant \(C>0\) independent of \(\vartheta \) such that $$\begin{aligned} \mathcal {R}_{\mathcal {B}(\mathscr {A}^{s+d}(\mathbb {R}^{n},w_0;E^N),\mathscr {A}^{s}(\mathbb {R}^{n},w_0;E^M))}\big (\{\vartheta (\mu )^{-1}\langle \mu \rangle ^{-d_+}{\text {op}}[p(\,\cdot \,,\mu )]:\mu \in \Sigma \}\big )\le C \Vert p\Vert ^{(d,\vartheta )}_N. \end{aligned}$$ Note that \(m(\,\cdot \,,\mu ):=[\xi \mapsto \langle \mu \rangle ^{-d_+}\langle \xi ,\mu \rangle ^d\langle \xi \rangle ^{-d}]\) satisfies Mikhlin's condition uniformly in \(\mu \). Indeed, by induction on \(|\alpha |\) one gets that \(\partial ^{\alpha }\langle \mu \rangle ^{-d_+}\langle \xi ,\mu \rangle ^d\langle \xi \rangle ^{-d}\) is a linear combination of terms of the form $$\begin{aligned} p_{j,k}(\xi ,\mu )=\xi ^{\beta }\langle \xi ,\mu \rangle ^{d-2j}\langle \xi \rangle ^{-d-2k}\langle \mu \rangle ^{-d_+} \end{aligned}$$ for some \(\beta \in \mathbb {N}_0^{n-1}\), \(j,k\in \mathbb {N}_0\) with \(|\alpha |=2j+2k-|\beta |\). For such a term, we obtain $$\begin{aligned} \langle \xi \rangle ^{|\alpha |}|p_{j,k}(\xi ,\mu )|&=m(\xi ,\mu )|\xi ^{\beta }|\langle \xi ,\mu \rangle ^{-2j}\langle \xi \rangle ^{|\alpha |-2k}\\&\le m(\xi ,\mu )\langle \xi \rangle ^{|\alpha |+|\beta |-2j-2k}\\&\lesssim 1.
\end{aligned}$$ Hence, by Mikhlin's theorem there is an \(N'\in \mathbb {N}\) such that $$\begin{aligned}&\mathcal {R}_{\mathcal {B}(\mathscr {A}^{s+d}(\mathbb {R}^{n-1},w_0;E^N),\mathscr {A}^{s}(\mathbb {R}^{n-1},w_0;E^M))}\big (\{\vartheta (\mu )^{-1}\langle \mu \rangle ^{-d_+}{\text {op}}[p(\,\cdot \,,\mu )]:\mu \in \Sigma \}\big )\\&\quad = \mathcal {R}_{\mathcal {B}(\mathscr {A}^{s+d}(\mathbb {R}^{n-1},w_0;E^N),\mathscr {A}^{s}(\mathbb {R}^{n-1},w_0;E^M))}\big (\{\vartheta (\mu )^{-1}\\&\quad {\text {op}}[p(\,\cdot \,,\mu )\langle \cdot ,\mu \rangle ^{-d}]{\text {op}}[\langle \mu \rangle ^{-d_+} \langle \cdot ,\mu \rangle ^{d}]:\mu \in \Sigma \}\big )\\&\quad \lesssim \mathcal {R}_{\mathcal {B}(\mathscr {A}^{s}(\mathbb {R}^{n-1},w_0;E^N),\mathscr {A}^{s}(\mathbb {R}^{n-1},w_0;E^M))}\big (\{\vartheta (\mu )^{-1}{\text {op}}[p(\,\cdot \,,\mu )\langle \cdot ,\mu \rangle ^{-d}]:\mu \in \Sigma \}\big )\\&\quad \lesssim \Vert p\Vert ^{(d,\vartheta )}_{N'}. \end{aligned}$$ \(\square \) Proposition 3.7 (Iterated version of Mikhlin's theorem) Let \(s,k\in \mathbb {R}\) and let E be a Banach space. Consider one of the following cases with Assumption 1.2 in mind, \(\mathscr {B}\) being defined on \(I_{x_n}=\mathbb {R}\) and with \(m\in L_{\infty }(\mathbb {R}^n;\mathcal {B}(E))\) being smooth enough: (a) Neither \(\mathscr {A}\) nor \(\mathscr {B}\) stands for the Bessel potential scale. For \(N\in \mathbb {N}_0\), we define $$\begin{aligned} \kappa _{m,N}:=\sup \big \{\langle \xi '\rangle ^{|\alpha '|}\langle \xi _n\rangle ^{\alpha _n}\Vert \partial _{\xi }^{\alpha }m(\xi ',\xi _n)\Vert _{\mathcal {B}(E)}: \alpha \in \mathbb {N}_0^{n},|\alpha |\le N, \xi \in \mathbb {R}^{n}\big \}. \end{aligned}$$ (b) \(\mathscr {A}\) stands for the Bessel potential scale and \(\mathscr {B}\) does not stand for the Bessel potential scale. For \(N\in \mathbb {N}_0\), we define $$\begin{aligned}&\kappa _{m,N}:=\sup _{\xi _n\in \mathbb {R},\alpha _n\in \mathbb {N}_0,\alpha _n\le N}\mathcal {R}\big \{|\xi '|^{|\alpha '|}\\&\langle \xi _n\rangle ^{\alpha _n}\partial _{\xi }^{\alpha }m(\xi ',\xi _n): \alpha '\in \mathbb {N}_0^{n-1},|\alpha '|\le N, \xi '\in \mathbb {R}^{n-1}\big \}. \end{aligned}$$ (c) \(\mathscr {B}\) stands for the Bessel potential scale and \(\mathscr {A}\) does not stand for the Bessel potential scale. For \(N\in \mathbb {N}_0\), we define $$\begin{aligned} \kappa _{m,N}:=\mathcal {R}\big \{\langle \xi '\rangle ^{|\alpha '|}|\xi _n|^{\alpha _n}\partial _{\xi }^{\alpha }m(\xi ',\xi _n): \alpha \in \mathbb {N}_0^{n},|\alpha |\le N, \xi \in \mathbb {R}^{n}\big \}. \end{aligned}$$ (d) Both \(\mathscr {A}\) and \(\mathscr {B}\) stand for the Bessel potential scale and E satisfies Pisier's property \((\alpha )\). For \(N\in \mathbb {N}_0\), we define $$\begin{aligned} \kappa _{m,N}:=\mathcal {R}\big \{|\xi '|^{|\alpha '|}|\xi _n|^{\alpha _n}\partial _{\xi }^{\alpha }m(\xi ',\xi _n): \alpha \in \mathbb {N}_0^{n},|\alpha |\le N, \xi \in \mathbb {R}^{n}\big \}. \end{aligned}$$ There is an \(N\in \mathbb {N}_0\) and a constant \(C>0\) independent of m such that $$\begin{aligned} \Vert {\text {op}}[m]\Vert _{\mathcal {B}(\mathscr {B}^{k}(\mathscr {A}^s))}\le C\kappa _{m,N}. \end{aligned}$$ First, we note that \({\text {op}}[\partial _{\xi _n}^{\alpha _n}m(\,\cdot ,\xi _n)]=\partial _{\xi _n}^{\alpha _n}{\text {op}}[m(\,\cdot ,\xi _n)]\), \(\alpha _n\in \mathbb {N}\), if m is smooth enough. Indeed, let \(\varepsilon >0\) be small enough and \(h\in (-\varepsilon ,\varepsilon )\).
Then, we have $$\begin{aligned}&\big \Vert {\text {op}}\big [\tfrac{1}{h}(m(\,\cdot ,\xi _n+h)-m(\,\cdot ,\xi _n))-\partial _nm(\,\cdot ,\xi _n) \big ]\big \Vert _{\mathcal {B}(\mathscr {A}^s,\mathscr {A}^s)}\\&\quad \le C\sup _{\alpha '\in \mathbb {N}_0^{n-1},|\alpha '|\le N'}\sup _{\xi '\in \mathbb {R}^{n-1}}\Vert \langle \xi '\rangle ^{|\alpha '|}\partial _{\xi '}^{\alpha '}[\tfrac{1}{h}(m(\xi ',\xi _n+h)-m(\xi ',\xi _n))-\partial _nm(\xi ',\xi _n) ]\Vert _{\mathcal {B}(E)}\\&\quad = C\sup _{\alpha '\in \mathbb {N}_0^{n-1},|\alpha '|\le N'}\sup _{\xi '\in \mathbb {R}^{n-1}}\bigg \Vert \langle \xi '\rangle ^{|\alpha '|}\partial _{\xi '}^{\alpha '}\bigg [\int _0^1\partial _n m(\xi ',\xi _n+sh)-\partial _n m(\xi ',\xi _n)\,ds\bigg ]\bigg \Vert _{\mathcal {B}(E)}\\&\quad \le C\sup _{\alpha '\in \mathbb {N}_0^{n-1},|\alpha '|\le N'}\sup _{\xi '\in \mathbb {R}^{n-1}}\sup _{s\in [0,1]}\langle \xi '\rangle ^{|\alpha '|}\Vert \partial _{\xi '}^{\alpha '}\partial _n[m(\xi ',\xi _n+s h)-m(\xi ',\xi _n)]\Vert _{\mathcal {B}(E)} \end{aligned}$$ Now we can use the uniform continuity of $$\begin{aligned} \mathbb {R}^{n-1}\times (-\varepsilon ,\varepsilon )\rightarrow \mathcal {B}(E),\,(\xi ',h)\mapsto \langle \xi '\rangle ^{|\alpha '|}\partial _{\xi '}^{\alpha '}\partial _n m(\xi ',\xi _n+h)) \end{aligned}$$ to see that we have convergence to 0 as \(h\rightarrow 0\) in the above estimate. The uniform continuity follows from the boundedness of the derivatives (if m is smooth enough). For derivatives of order \(\alpha _n\ge 2\) we can apply the same argument to \(\partial _{\xi _n}^{\alpha _n-1}m\). The idea is now to apply Mikhlin's theorem twice. For example, in case (d) one obtains $$\begin{aligned} \Vert {\text {op}}[m]\Vert _{\mathcal {B}(\mathscr {B}^{k}(\mathscr {A}^s)))}&\lesssim \mathcal {R}_{\mathcal {B}(\mathscr {A}^s,\mathscr {A}^s)}\big (\{|\xi _n|^{\alpha _n}\partial _{\xi _n}^{\alpha _n}{\text {op}}[m(\,\cdot ,\xi _n)]:\alpha _n\in \mathbb {N},\alpha _n\le N_n,\xi _n\in \mathbb {R}\}\big )\\&=\mathcal {R}_{\mathcal {B}(\mathscr {A}^s,\mathscr {A}^s)}\big (\{{\text {op}}[|\xi _n|^{\alpha _n}\partial _{\xi _n}^{\alpha _n}m(\,\cdot ,\xi _n)]:\alpha _n\in \mathbb {N},\alpha _n\le N_n,\xi _n\in \mathbb {R}\}\big )\\&\lesssim \kappa _{m,N} \end{aligned}$$ by Theorem 2.15(b) for \(N_n,N\in \mathbb {N}_0\) large enough. The other cases are obtained analogously. \(\square \) There also is an \(\mathcal {R}\)-bounded version of Proposition 3.7 (Iterated \(\mathcal {R}\)-bounded version of Mikhlin's theorem) Let \(s,k\in \mathbb {R}\) and let E be a Banach space. Consider one of the following cases with Assumption 1.2 in mind and with \(\mathcal {M}\subset C^{\widetilde{N}}(\mathbb {R}^n{\setminus }\{0\};\mathcal {B}(E))\) with \(\widetilde{N}\in \mathbb {N}_0\) being large enough: $$\begin{aligned} \kappa _{\mathcal {M},N}:=\mathcal {R}\big \{\langle \xi '\rangle ^{|\alpha '|}\langle \xi _n\rangle ^{\alpha _n}\partial _{\xi }^{\alpha }m(\xi ): m\in \mathcal {M},\alpha \in \mathbb {N}_0^{n},|\alpha |\le N, \xi \in \mathbb {R}^{n}\big \}. \end{aligned}$$ \(\mathscr {A}\) stands for the Bessel potential scale, \(\mathscr {B}\) does not stand for the Bessel potential scale and E satisfies Pisier's property \((\alpha )\). For \(N\in \mathbb {N}_0\), we define $$\begin{aligned} \kappa _{\mathcal {M},N}:=\mathcal {R}\big \{|\xi '|^{|\alpha '|}\langle \xi _n\rangle ^{\alpha _n}\partial _{\xi }^{\alpha }m(\xi ): m\in \mathcal {M},\alpha \in \mathbb {N}_0^{n},|\alpha |\le N, \xi \in \mathbb {R}^{n}\big \}. 
\end{aligned}$$ \(\mathscr {B}\) stands for the Bessel potential scale, \(\mathscr {A}\) does not stand for the Bessel potential scale and E satisfies Pisier's property \((\alpha )\). For \(N\in \mathbb {N}_0\), we define $$\begin{aligned} \kappa _{\mathcal {M},N}:=\mathcal {R}\big \{\langle \xi '\rangle ^{|\alpha '|}|\xi _n|^{\alpha _n}\partial _{\xi }^{\alpha }m(\xi ): m\in \mathcal {M},\alpha \in \mathbb {N}_0^{n},|\alpha |\le N, \xi \in \mathbb {R}^{n}\big \}. \end{aligned}$$ Both \(\mathscr {A}\) and \(\mathscr {B}\) stand for the Bessel potential scale and E satisfies Pisier's property \((\alpha )\). For \(N\in \mathbb {N}_0\), we define $$\begin{aligned} \kappa _{\mathcal {M},N}:=\mathcal {R}\big \{|\xi '|^{|\alpha '|}|\xi _n|^{\alpha _n}\partial _{\xi }^{\alpha }m(\xi ',\xi _n):m\in \mathcal {M}, \alpha \in \mathbb {N}_0^{n},|\alpha |\le N, \xi \in \mathbb {R}^{n}\big \}. \end{aligned}$$ There is an \(N\in \mathbb {N}_0\) and a constant \(C>0\) such that $$\begin{aligned} \mathcal {R}(\{{\text {op}}[m]:m\in \mathcal {M}\})\le C\kappa _{\mathcal {M},N}\quad \text {in }\mathcal {B}(\mathscr {B}^{k}(\mathscr {A}^s)). \end{aligned}$$ This follows by the same proof as Proposition 3.7. One just has to use the \(\mathcal {R}\)-bounded versions of Mikhlin's theorem. \(\square \) (Lifting Property for Mixed Scales). Let \(s,k,t_0,t_1\in \mathbb {R}\). Then, $$\begin{aligned} \langle D_n\rangle ^{t_0}\langle D'\rangle ^{t_1}:\mathscr {B}^{k+t_0}(\mathscr {A}^{s+t_1}){\mathop {\rightarrow }\limits ^{\simeq }} \mathscr {B}^{k}(\mathscr {A}^{s}) \end{aligned}$$ is an isomorphism of Banach spaces. If \(\mathscr {A}^\bullet \) or \(\mathscr {B}^{\bullet }\) belongs to the Bessel potential scale, then it follows from the definition of Bessel potential spaces that $$\begin{aligned} \langle D'\rangle ^{t_1} :\mathscr {A}^{s+t_1}{\mathop {\rightarrow }\limits ^{\simeq }} \mathscr {A}^{s}\quad \text {or} \quad \langle D_n\rangle ^{t_0}:\mathscr {B}^{k+t_0}(\mathscr {A}^{s}){\mathop {\rightarrow }\limits ^{\simeq }} \mathscr {B}^{k}(\mathscr {A}^{s}), \end{aligned}$$ respectively. In the other cases, this is the statement of [37, Proposition 3.9]. Composing the two mappings yields the assertion. \(\square \) Let \(s,k\in \mathbb {R}\) and \(t\ge 0\). Suppose that E has Pisier's property \((\alpha )\) if both \(\mathscr {A}\) and \(\mathscr {B}\) belong to the Bessel potential scale. Then, $$\begin{aligned} \langle D\rangle ^{t}:\mathscr {B}^{k+t}(\mathscr {A}^{s})\cap \mathscr {B}^{k}(\mathscr {A}^{s+t}){\mathop {\rightarrow }\limits ^{\simeq }}\mathscr {B}^k(\mathscr {A}^s) \end{aligned}$$ is an isomorphism of Banach spaces. By the assumptions we imposed on \(\mathscr {B}^{\bullet }(\mathscr {A}^\bullet )\), we can apply Mikhlin's theorem. We define $$\begin{aligned} f:\mathbb {R}^{n}\rightarrow \mathbb {R},\;\xi \mapsto \frac{\langle \xi \rangle ^{t}}{\langle \xi '\rangle ^{t}+\langle \xi _n\rangle ^{t}} \end{aligned}$$ which satisfies the assumptions of Proposition 3.7. Indeed, by induction we have that \(\partial _{\xi }^{\alpha }f\) is a linear combination of terms of the form $$\begin{aligned} \xi ^{\beta }\langle \xi \rangle ^{t-i'-i_n}\langle \xi '\rangle ^{(t-2)j'-k'}\langle \xi _n\rangle ^{(t-2)j_n-k_n}(\langle \xi '\rangle ^t+\langle \xi _n\rangle ^t)^{-1-j'-j_n} \end{aligned}$$ for some \(\beta \in \mathbb {N}_0^n\), \(i',i_n,j',j_n,k',k_n\in \mathbb {N}_0\) such that \(\alpha _n=i_n+2j_n+k_n-\beta _n\) and \(|\alpha '|=i'+2j'+k'-|\beta '|\).
But for such a term we have that $$\begin{aligned}&|\langle \xi '\rangle ^{|\alpha '|}\langle \xi _n\rangle ^{\alpha _n}\xi ^{\beta }\langle \xi \rangle ^{t-i'-i_n}\langle \xi '\rangle ^{(t-2)j'-k'}\langle \xi _n\rangle ^{(t-2)j_n-k_n}(\langle \xi '\rangle ^t+\langle \xi _n\rangle ^t)^{-1-j'-j_n}|\\&\quad = |f(\xi )\langle \xi '\rangle ^{|\alpha '|}\langle \xi _n\rangle ^{\alpha _n}\xi ^{\beta }\langle \xi \rangle ^{-i'-i_n}\langle \xi '\rangle ^{(t-2)j'-k'}\langle \xi _n\rangle ^{(t-2)j_n-k_n}(\langle \xi '\rangle ^t+\langle \xi _n\rangle ^t)^{-j'-j_n}|\\&\quad \le f(\xi )[\langle \xi '\rangle ^{|\alpha '|+|\beta '|-i'-2j'-k'}\langle \xi _n\rangle ^{\alpha _n+\beta _n-i_n-2j_n-k_n}][\langle \xi '\rangle ^{tj'}\langle \xi _n\rangle ^{tj_n}(\langle \xi '\rangle ^t+\langle \xi _n\rangle ^t)^{-j'-j_n}]\\&\quad \le f(\xi )\\&\quad \le 1. \end{aligned}$$ This shows that f satisfies the assumptions of Proposition 3.7. Therefore, we obtain $$\begin{aligned} \Vert \langle D\rangle ^{t}u\Vert _{\mathscr {B}^k(\mathscr {A}^s)}&=\Vert f(D)(\langle D'\rangle ^{t}+\langle D_n\rangle ^{t})u \Vert _{\mathscr {B}^k(\mathscr {A}^s)}\lesssim \Vert (\langle D'\rangle ^{t}+\langle D_n\rangle ^{t})u \Vert _{\mathscr {B}^k(\mathscr {A}^s)}\\&\lesssim \max \{\Vert u \Vert _{\mathscr {B}^{k+t}(\mathscr {A}^{s})},\Vert u \Vert _{\mathscr {B}^{k}(\mathscr {A}^{s+t})}\} \end{aligned}$$ and $$\begin{aligned} \max \{\Vert u \Vert _{\mathscr {B}^{k+t}(\mathscr {A}^{s})},\Vert u \Vert _{\mathscr {B}^{k}(\mathscr {A}^{s+t})}\}&\le \Vert \langle D_n\rangle ^t u \Vert _{\mathscr {B}^{k}(\mathscr {A}^{s})}+\Vert \langle D'\rangle ^t u \Vert _{\mathscr {B}^{k}(\mathscr {A}^{s})}\\&= \bigg \Vert \frac{\langle D'\rangle ^{t}}{\langle D\rangle ^{t}}\langle D\rangle ^{t}u \bigg \Vert _{\mathscr {B}^k(\mathscr {A}^s)}+\bigg \Vert \frac{\langle D_n\rangle ^{t}}{\langle D\rangle ^{t}}\langle D\rangle ^{t}u \bigg \Vert _{\mathscr {B}^k(\mathscr {A}^s)}\\&\lesssim \Vert \langle D\rangle ^{t}u \Vert _{\mathscr {B}^k(\mathscr {A}^s)}. \end{aligned}$$ This proves the assertion. \(\square \) Let \(s,k,d\in \mathbb {R}\). Let $$\begin{aligned} S^d_{\mathscr {B},\mathscr {A}}(\mathbb {R}^n,\mathcal {B}(E)):={\left\{ \begin{array}{ll} S^d(\mathbb {R}^n,\mathcal {B}(E))&{}\quad \text { if neither }\mathscr {A}\text { nor }\mathscr {B}\\ &{} \qquad \qquad \text { stands for the Bessel potential scale},\\ S^d_{\mathcal {R}}(\mathbb {R}^n,\mathcal {B}(E))&{}\quad \text { otherwise}. \end{array}\right. } \end{aligned}$$ Suppose that E has Pisier's property \((\alpha )\) if both \(\mathscr {A}\) and \(\mathscr {B}\) belong to the Bessel potential scale. If \(d\le 0\), then $$\begin{aligned} S^d_{\mathscr {B},\mathscr {A}}(\mathbb {R}^n,\mathcal {B}(E)) \times \mathscr {B}^k(\mathscr {A}^s) \rightarrow \mathscr {B}^{k-d}(\mathscr {A}^s)\cap \mathscr {B}^k(\mathscr {A}^{s-d}),\;(p,u)\mapsto {\text {op}}[p] u \end{aligned}$$ is bilinear and continuous. If \(d\ge 0\), then $$\begin{aligned} S^d_{\mathscr {B},\mathscr {A}}(\mathbb {R}^n,\mathcal {B}(E)) \times (\mathscr {B}^{k+d}(\mathscr {A}^s)\cap \mathscr {B}^k(\mathscr {A}^{s+d})) \rightarrow \mathscr {B}^k(\mathscr {A}^s),\;(p,u)\mapsto {\text {op}}[p] u \end{aligned}$$ is bilinear and continuous. By writing \(p(\xi )=\frac{p(\xi )}{\langle \xi \rangle ^d}\langle \xi \rangle ^d\) and using Proposition 3.10, we only have to treat the case \(d=0\). But this case is included in the iterated version of Mikhlin's theorem, Proposition 3.7.
Indeed, for a symbol \(p\in S^0(\mathbb {R}^n,\mathcal {B}(E))\) we have $$\begin{aligned} \sup _{\xi \in \mathbb {R}^n\atop \alpha \in \mathbb {N}_0^n,|\alpha |_1\le k} \langle \xi '\rangle ^{|\alpha '|}\langle \xi _n\rangle ^{|\alpha _n|}\Vert \partial _{\xi }^{\alpha }p(\xi )\Vert _{\mathcal {B}(E)}\le \sup _{\xi \in \mathbb {R}^n\atop \alpha \in \mathbb {N}_0^n,|\alpha |\le k} \langle \xi \rangle ^{|\alpha |}\Vert \partial _{\xi }^{\alpha }p(\xi )\Vert _{\mathcal {B}(E)}<\infty \end{aligned}$$ for all \(k\in \mathbb {N}_0\). If \(p\in S^0_{\mathcal {R}}(\mathbb {R}^n,\mathcal {B}(E))\), we can use Kahane's contraction principle in order to obtain $$\begin{aligned}&\mathcal {R}\big \{\langle \xi '\rangle ^{|\alpha '|}\langle \xi _n\rangle ^{|\alpha _n|}\partial _{\xi }^{\alpha }p(\xi ): \alpha \in \mathbb {N}_0^n,|\alpha |\le k,\xi \in \mathbb {R}^n\big \}\\&\quad \le \mathcal {R}\big \{\langle \xi \rangle ^{|\alpha |}\partial _{\xi }^{\alpha }p(\xi ): \alpha \in \mathbb {N}_0^n,|\alpha |\le k,\xi \in \mathbb {R}^n\big \}. \end{aligned}$$ \(\square \)
Poisson operators in mixed scales
Consider Eq. (1-1) with \(f=0\), i.e., $$\begin{aligned} \lambda u -A(D)u&=0\quad \;\text {in }\mathbb {R}^n_+,\nonumber \\ B_j(D)u&=g_j\quad \text {on }\mathbb {R}^{n-1}. \end{aligned}$$ Recall that we always assume that the ellipticity condition and the Lopatinskii–Shapiro condition are satisfied in the sector \(\Sigma _{\phi '}{\setminus }\{0\}\) with \(\phi '\in (0,\pi )\) and that \(\phi \in (0,\phi ')\). The solution operators of (4-1) which map the boundary data \(g=(g_1,\ldots ,g_m)\) to the solution u are called Poisson operators. This notion comes from the Boutet de Monvel calculus, where Poisson operators are part of the so-called singular Green operator matrices. These matrices were introduced to extend the idea of pseudo-differential operators to boundary value problems. They allow for a unified treatment of boundary value problems and their solution operators, since both of them are contained in the algebra of singular Green operator matrices. In this work however, we do not need this theory in full generality. Instead, we just focus on Poisson operators. We will use a solution formula for the Poisson operator corresponding to (4-1) which was derived in the classical work [8, Proposition 6.2] by Denk, Hieber and Prüss. In order to derive this formula, a Fourier transform in the tangential directions of (4-1) is applied. This yields a linear ordinary differential equation of order 2m at each point in the frequency space. This ordinary differential equation is then transformed to a linear first-order system which can easily be solved by an exponential function if one knows the values of $$\begin{aligned} U(\xi ',0):=(\mathscr {F}'u(\xi ',0),\partial _n\mathscr {F}'u(\xi ',0),\ldots ,\partial _n^{2m-1}\mathscr {F}'u(\xi ',0)). \end{aligned}$$ The Lopatinskii–Shapiro condition ensures that those vectors \(U(\xi ',0)\) which yield a stable solution can be uniquely determined from \((\mathscr {F}'g_1,\ldots ,\mathscr {F}'g_m)\). The operator which gives this solution is denoted by $$\begin{aligned} M(\xi ',\lambda ):E^m\rightarrow E^{2m},\,(\mathscr {F}'g_1(\xi '),\ldots ,\mathscr {F}'g_m(\xi '))\mapsto U(\xi ',0). \end{aligned}$$ Now one just has to take the inverse Fourier transform of the solution and a projection to the first component. The latter is necessary to come back from the solution of the first-order system to the solution of the higher-order equation.
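To make this construction concrete, consider the simplest model case (our own added illustration; it is not needed for the general argument): the Dirichlet problem for the Laplacian, i.e., \(2m=2\), \(A(D)=\Delta \) and \(B_1(D)u=u|_{x_n=0}\). Applying \(\mathscr {F}'\) to \(\lambda u-\Delta u=0\) in \(\mathbb {R}^n_+\) with boundary datum \(g_1\) gives, for each fixed \(\xi '\), the ordinary differential equation $$\begin{aligned} (\lambda +|\xi '|^2)\mathscr {F}'u(\xi ',x_n)-\partial _n^2\mathscr {F}'u(\xi ',x_n)=0,\qquad \mathscr {F}'u(\xi ',0)=\mathscr {F}'g_1(\xi '), \end{aligned}$$ whose unique stable (i.e., decaying) solution is $$\begin{aligned} \mathscr {F}'u(\xi ',x_n)=e^{-(\lambda +|\xi '|^2)^{1/2}x_n}\mathscr {F}'g_1(\xi '), \end{aligned}$$ where the square root is taken with positive real part; this is possible since \(\lambda +|\xi '|^2\) stays in the sector \(\Sigma _{\phi '}\) for \(\lambda \in \Sigma _{\phi '}{\setminus }\{0\}\). Hence, in this model case the Poisson operator is literally an exponential function of \(x_n\) with values in Fourier multipliers in the tangential variables, and its decay is governed by \({\text {Re}}(\lambda +|\xi '|^2)^{1/2}\gtrsim |\lambda |^{1/2}+|\xi '|\) with a constant depending only on the sector. The general construction described above reproduces exactly this structure for arbitrary parameter-elliptic systems.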
This would already be enough to derive a good solution formula. However, in [8] an additional rescaling was introduced so that compactness arguments can be applied. More precisely, the variables \(\rho (\xi ',\lambda )=\langle \xi ',|\lambda |^{1/2m}\rangle =(1+|\xi '|^2+|\lambda |^{1/m})^{1/2}\), \(b=\xi '/\rho \) and \(\sigma =\lambda /\rho ^{2m}\) are introduced in the Fourier image. The solution formula is then written in terms of \((\rho ,b,\sigma )\) instead of \((\xi ',\lambda )\). For this reformulation, it is crucial that the operators \(A,B_1,\ldots ,B_m\) are homogeneous, i.e., that there are no lower-order terms. And even though this rescaling makes the formulas more involved, the compactness arguments which can be used as a consequence are very useful. After carrying out all these steps, the solution can be represented by $$\begin{aligned} u(x)={\text {pr}}_1[{\text {Poi}}(\lambda )g](x) \end{aligned}$$ where \({\text {pr}}_1:E^{2m}\rightarrow E,\,w\mapsto w_1\) is the projection onto the first component, \(g=(g_1,\ldots ,g_m)^T\) and the operator \({\text {Poi}}(\lambda ):E^m\rightarrow E^{2m}\) is given by $$\begin{aligned} \left[ {\text {Poi}}(\lambda )g\right] (x):=\big [(\mathscr {F}')^{-1} e^{i\rho A_0(b,\sigma )x_n}M(b,\sigma )\hat{g}_{\rho }\big ](x'). \end{aligned}$$ Here, \(\mathscr {F}'\) is the Fourier transform along \(\mathbb {R}^{n-1}\), i.e., in tangential direction, \(A_0\) is a smooth function with values in \(\mathcal {B}(E^{2m},E^{2m})\) which one obtains from \(\lambda -A(\xi ',D_n)\) after a reduction to a first-order system, and M is a smooth function with values in \(\mathcal {B}(E^{m},E^{2m})\) which maps the values of the boundary operators applied to the stable solution v to the vector containing its derivatives at \(x_n=0\) up to the order \(2m-1\), i.e., $$\begin{aligned} (B_1(D)v(0),\ldots , B_m(D)v(0))^T\mapsto (v(0),\partial _n v(0),\ldots , \partial _n^{2m-1} v(0))^T. \end{aligned}$$ Moreover, \(\rho \) is a positive parameter that can be chosen in different ways in dependence of \(\xi '\) and \(\lambda \). In our case, it will be given by \(\rho (\xi ',\lambda )=\langle \xi ',|\lambda |^{1/2m}\rangle =(1+|\xi '|^2+|\lambda |^{1/m})^{1/2}\), \(b=\xi '/\rho \), \(\sigma =\lambda /\rho ^{2m}\) and \(\hat{g}_{\rho }=((\mathscr {F}'g_1)/\rho ^{m_1},\ldots ,(\mathscr {F}'g_m)/\rho ^{m_m})^T\). Again, we want to emphasize that b, \(\sigma \), and \(\rho \) depend on \(\xi '\) and \(\lambda \). We only neglect this dependence in the notation for the sake of readability. Another operator that we will use later is the spectral projection \(\mathscr {P}_{-}\) of the matrix \(A_0\) to the part of the spectrum that lies above the real line. This spectral projection has the property that \(\mathscr {P}_-(b,\sigma )M(b,\sigma )=M(b,\sigma )\). For our purposes, we will rewrite the above representation in the following way: For \(j=1,\ldots ,m\) we write $$\begin{aligned} M_{\rho ,j}(b,\sigma )\hat{g}_j := M(b,\sigma )\frac{\hat{g}_j\otimes e_j}{\rho ^{m_j}}, \end{aligned}$$ where \(\hat{g}_j\otimes e_j\) denotes the m-tuple whose j-th component equals \(\hat{g}_j\) and whose other components are all equal to 0, as well as $$\begin{aligned}{}[{\text {Poi}}_j(\lambda )g_j](x):=\big [(\mathscr {F}')^{-1} e^{i\rho A_0(b,\sigma )x_n}M_{\rho ,j}(b,\sigma )\hat{g}_j\big ](x') \end{aligned}$$ so that we obtain $$\begin{aligned} u= {\text {pr}}_1{\text {Poi}}(\lambda )g={\text {pr}}_1\sum _{j=1}^m{\text {Poi}}_j(\lambda )g_j.
\end{aligned}$$ If we look at Formula (4-2), we can already see that the solution operator is actually just an exponential function in normal direction. As such, it should be arbitrarily smooth. Of course, one has to think about with respect to which topology in tangential direction this smoothness should be understood. It is the aim of this section to analyze this carefully. We treat (4-2) as a function of \(x_n\) with values in the space of pseudo-differential operators in tangential direction. Since (4-2) is exponentially decaying in \(\xi '\) if \(x_n>0\), the pseudo-differential operators will have order \(-\infty \), i.e., they are smoothing. Hence, the solutions will also be arbitrarily smooth, no matter how rough the boundary data is. However, the exponential decay becomes slower as one approaches \(x_n=0\). This will lead to singularities if (4-2) is considered as a function of \(x_n\) with values in the space of pseudo-differential operators of a fixed order. In the following, we will study how strong these singularities are depending on the regularity in normal and tangential directions in which the singularities are studied. The answer will be given in Theorem 4.16. Therein, one may choose the regularity in normal direction k, the integrability in normal direction p, the regularity in tangential direction t of the solution and the regularity s of the boundary data. Then, the parameter r in the relation \(r-p[t+k-m_j-s]_+>-1\) gives a description of the singularity at the boundary, since it is the power of the power weight which one has to add to the solution space such that the Poisson operator is a well-defined continuous operator between the spaces one has chosen. In the following, we oftentimes substitute \(\mu =\lambda ^{1/2m}\) for homogeneity reasons. If \(\lambda \) is above the real line, then we take \(\mu \) to be the first of these roots, and if \(\lambda \) is below the real line, we take \(\mu \) to be the last of these roots. If \(\lambda >0\), then we just take the ordinary positive root. A domain \(\mathcal {O}\) is called plump with parameters \(R>0\) and \(\delta \in (0,1]\) if for all \(x\in \mathcal {O}\) and all \(r\in (0,R]\) there exists a \(z\in \mathcal {O}\) such that $$\begin{aligned} B(z,\delta r)\subset B(x,r)\cap \mathcal {O}. \end{aligned}$$ Let \(E_0,E_1\) be a Banach spaces. As described above, we take \(\mu =\lambda ^{1/2m}\) so that \(\rho (\xi ',\mu )=\langle \xi ',\mu \rangle \), \(b=\xi '/\rho \) and \(\sigma =\mu ^{2m}/\rho ^{2m}\). Let \(U\subset \mathbb {R}^{n-1}\times \Sigma _{\phi }\) be a plump and bounded environment of the range of \((b,\sigma )\). Then, the mapping $$\begin{aligned} BUC^{\infty }(U,\mathcal {B}(E_0,E_1))\rightarrow S^0_{\mathcal {R}}(\mathbb {R}^{n-1}\times \Sigma _{\phi /2m},\mathcal {B}(E_0,E_1)), A \mapsto A\circ (b,\sigma ) \end{aligned}$$ is well defined and continuous. A similar proof was carried out in [22, Proposition 4.21]. We combine this proof with [24, Theorem 8.5.21] in order to obtain the \(\mathcal {R}\)-bounded version. Let \(A\in BUC^{\infty }(U,\mathcal {B}(E_0,E_1))\). 
By induction on \(|\alpha '|+|\gamma |\), we show that \(D_{\xi '}^{\alpha '}D_{\mu }^\gamma (A\circ (b,\sigma ))\) is a linear combination of terms of the form \((D_{\xi '}^{\widetilde{\alpha }'}D_{\mu }^{\widetilde{\gamma }}A)\circ (b,\sigma )\cdot f\) with \(f\in S^{-|\alpha '|-|\gamma |}_{\mathcal {R}}(\mathbb {R}^{n-1}\times \Sigma _{\phi /2m})\), \(\widetilde{\alpha }'\in \mathbb {N}_0^{n-1}\) and \(\widetilde{\gamma }\in \mathbb {N}_0\). It follows from [24, Theorem 8.5.21] that this is true for \(|\alpha '|+|\gamma |=0\). So let \(j\in \{1,\ldots ,n-1\}\). By the induction hypothesis, we have that \(D_{\xi '}^{\alpha '}D_{\mu }^\gamma (A\circ (b,\sigma ))\) is a linear combination of terms of the form \((D_{\xi '}^{\widetilde{\alpha }'}D_{\mu }^{\widetilde{\gamma }}A)\circ (b,\sigma )\cdot f\) with \(f\in S^{-|\alpha '|-|\gamma |}_{\mathcal {R}}(\mathbb {R}^{n-1}\times \Sigma _{\phi /2m})\), \(\widetilde{\alpha }'\in \mathbb {N}_0^{n-1}\) and \(\widetilde{\gamma }\in \mathbb {N}_0\). Hence, for \(D_{\xi _j}D_{\xi '}^{\alpha '}D_{\mu }^{\gamma }\) it suffices to treat the summands separately, i.e., we consider \(D_{\xi _j}((D_{\xi '}^{\widetilde{\alpha }'}D_{\mu }^{\widetilde{\gamma }}A)\circ (b,\sigma )\cdot f)\). By the product rule and the chain rule, we have $$\begin{aligned}&D_{\xi _j}((D_{\xi '}^{\widetilde{\alpha }'}D_{\mu }^{\widetilde{\gamma }}A)\circ (b,\sigma )\cdot f)\\&\quad =(D_{\xi '}^{\widetilde{\alpha }'}D_{\mu }^{\widetilde{\gamma }}A)\circ (b,\sigma ))(D_jf)+\bigg (\sum _{l=2}^{n}D_{\xi _j}( \tfrac{\xi '_l}{\rho })\cdot f \cdot [(D_l D_{\xi '}^{\widetilde{\alpha }'}D_{\mu }^{\widetilde{\gamma }} A)\circ (b,\sigma )]\bigg )\\&\qquad +D_{\xi _j}( \tfrac{\mu ^{2m}}{\rho ^{2m}})\cdot f \cdot [(D_l D_{\xi '}^{\widetilde{\alpha }'}D_{\mu }^{\widetilde{\gamma }} A)\circ (b,\sigma )]. \end{aligned}$$ By the induction hypothesis and Remark 3.3 (e) and (f) we have that $$\begin{aligned} (D_{\xi _j}f),(D_{\xi _j}\frac{\xi _1'}{\rho })f,\ldots ,(D_{\xi _j} \frac{\xi '_{n-1}}{\rho })f,(D_{\xi _j} \tfrac{\mu ^{2m}}{\rho ^{2m}}) f\in S^{-|\alpha '|-|\gamma |-1}_{\mathcal {R}}(\mathbb {R}^{n-1}\times \Sigma _{\phi /2m}). \end{aligned}$$ The same computation for \(D_{\mu _1}\) and \(D_{\mu _2}\) instead of \(D_{\xi _j}\) also shows the desired behavior and hence, the induction is finished. Now we use [24, Theorem 8.5.21] again: Since U is plump we have that A and all its derivatives have an \(\mathcal {R}\)-bounded range on U. Therefore, the terms \((D_{\xi '}^{\widetilde{\alpha }'}D_{\mu }^{\widetilde{\gamma }}A)\circ (b,\sigma )\cdot f\) from above satisfy $$\begin{aligned} \mathcal {R}_{\mathcal {B}(E_0,E_1)}\{\langle \xi ',\mu \rangle ^{|\alpha '|+|\gamma |}(D_{\xi '}^{\widetilde{\alpha }'}D_{\mu }^{\widetilde{\gamma }}A)\circ (b,\sigma )\cdot f(\xi ',\mu ):(\xi ',\mu )\in \mathbb {R}^{n-1}\times \Sigma _{\phi /2m}\}<\infty \end{aligned}$$ which shows the assertion. \(\square \) Corollary 4.4 There is a constant \(c>0\) such that the mapping $$\begin{aligned} \mathbb {R}_+\rightarrow S^{0}_{\mathcal {R}}(\mathbb {R}^{n-1}\times \Sigma _{\phi /2m};\mathcal {B}(E^{2m})), y\mapsto [(\xi ',\mu )\mapsto e^{cy}e^{iA_0(b,\sigma )y}\mathscr {P}_{-}(b,\sigma )] \end{aligned}$$ is bounded and and uniformly continuous. 
There are constants \(C,c>0\) such that $$\begin{aligned} \mathcal {R}(\{e^{c\rho x_n}e^{i\rho A_0(b,\sigma )x_n}\mathscr {P}_-(b,\sigma ):(\xi ',\mu )\in \mathbb {R}^{n-1}\times \Sigma _{\phi /2m}\})<C \end{aligned}$$ for all \(x_n\ge 0\). By Lemma 4.3, it suffices to show that there is a plump environment U of the range \((b,\sigma )\) such that $$\begin{aligned} \mathbb {R}_+\rightarrow BUC^{\infty }(U,\mathcal {B}(E^{2m})), y\mapsto [(\xi ',\mu )\mapsto e^{cy}e^{iA_0(\xi ,\mu )y}\mathscr {P}_{-}(\xi ,\mu )] \end{aligned}$$ is bounded and continuous. We can for example take $$\begin{aligned} U=\big \{\big (\tfrac{\theta \xi '}{\rho },\tfrac{\theta \mu ^{2m}}{\rho ^{2m}}\big ): \xi '\in \mathbb {R}^{n-1},\,\mu \in \Sigma _{\phi /2m},\,\theta \in (\tfrac{1}{2},2)\big \}. \end{aligned}$$ Obviously, this set contains the range of \((b,\sigma )\) and it is smooth and relatively compact. By this compactness, it follows as in [8, Section 6] (mainly because of the spectral gap (6.11)) that there is a constant \(c>0\) such that $$\begin{aligned} \sup _{y\ge 0,\, (\xi ',\mu )\in U}\Vert e^{2cy}e^{i A_0(\xi ',\mu )y}\mathscr {P}_-(\xi ',\mu ) \Vert _{\mathcal {B}(E^{2m})}<\infty . \end{aligned}$$ Now, we show by induction on \(|\alpha '|+|\gamma |\) that \(\partial _{\xi '}^{\alpha '}\partial _{\mu }^{\gamma }e^{i A_0(\xi ',\mu )y}\mathscr {P}_-(\xi ',\mu )\) is a linear combination of terms of the form $$\begin{aligned} f(\xi ',\mu ) e^{i A_0(\xi ',\mu )y}\mathscr {P}_-(\xi ',\mu ) g(\xi ',\mu )y^p \end{aligned}$$ where \(f,g:\mathbb {R}^{n-1}\times \Sigma _{\phi /2m}\rightarrow \mathcal {B}(E^{2m})\) are holomorphic and \(p\in \mathbb {N}_0\). Obviously this is true for \(|\alpha '|+|\gamma |=0\). For the induction step, we can directly use the induction hypothesis and consider a term of the form (4-4). Since $$\begin{aligned}&e^{i A_0(\xi ',\mu )y}\mathscr {P}_-(\xi ',\mu )=e^{i A_0(\xi ',\mu )y}\mathscr {P}_-(\xi ',\mu )^2\\&\quad =\mathscr {P}_-(\xi ',\mu )e^{i A_0(\xi ',\mu )y}\mathscr {P}_-(\xi ',\mu )=\mathscr {P}_-(\xi ',\mu )e^{i A_0(\xi ',\mu )y} \end{aligned}$$ one can directly verify that the derivatives \(\partial _{\xi _j}\partial _{\xi '}^{\alpha '}\partial _{\mu }^{\gamma }e^{i A_0(\xi ',\mu )y}\mathscr {P}_-(\xi ',\mu )\) \((j=1,\ldots ,n-1)\) and \(\partial _{\mu _i}\partial _{\xi '}^{\alpha '}\partial _{\mu }^{\gamma }e^{i A_0(\xi ',\mu )y}\mathscr {P}_-(\xi ',\mu )\) \((i\in \{1,2\})\) are again a linear combination of terms of the form (4-4). But for such a term, we have $$\begin{aligned} \Vert f e^{i A_0y}\mathscr {P}_- gy^p \Vert _{BUC(U,\mathcal {B}(E^{2m}))}\le C e^{-cy} \end{aligned}$$ for some constant \(C>0\) and all \(y\ge 0\). This shows that $$\begin{aligned} \big ([(\xi ',\mu )\mapsto e^{cy}e^{iA_0(\xi ,\mu )y}\mathscr {P}_{-}(\xi ,\mu )]\big )_{y\ge 0} \end{aligned}$$ satisfies the assumptions of Lemma 4.3 uniformly in y so that the boundedness follows. The continuity follows from applying the same argument to $$\begin{aligned} (e^{iA_0(\xi ,\lambda )h}-{\text {id}}_{E^{2m}})e^{iA_0(\xi ,\lambda )y}\mathscr {P}_-(\xi ',\mu ) \end{aligned}$$ for small |h|, \(h\in \mathbb {R}\). Note that \((\xi ,\lambda )\) only runs through a relatively compact set again so that \(e^{iA_0(\xi ,\lambda )h}-{\text {id}}_{E^{2m}}\rightarrow 0\) in \(BUC^{\infty }(U)\) as \(h\rightarrow 0\). This follows from the first part by substituting \(y=\rho x_n\). \(\square \) Lemma 4.5 Let \(n_1,n_2\in \mathbb {R}\) and \(c>0\).
Moreover, let \(f_0\in S^{n_1}_{\mathcal {R}}(\mathbb {R}^{n-1}\times \Sigma _{\phi /2m},\mathcal {B}(E^{2m}))\) and \(g_0\in S_{\mathcal {R}}^{n_2}(\mathbb {R}^{n-1}\times \Sigma _{\phi /2m},\mathcal {B}(E,E^{2m}))\). Then, for all \(\alpha \in \mathbb {N}_0^{n+1}\) we have that \(\partial _{\xi ',\mu }^{\alpha }f_0e^{c\rho x_n+i\rho A_0(b,\sigma )x_n}\mathscr {P}_-(b,\sigma )g_0\) is a linear combination of terms of the form $$\begin{aligned} f_{\alpha }e^{c\rho x_n+i\rho A_0(b,\sigma )x_n}\mathscr {P}_-(b,\sigma )g_{\alpha }x_n^k \end{aligned}$$ where \(f_{\alpha }\in S_{\mathcal {R}}^{n_1-d_1}(\mathbb {R}^{n-1}\times \Sigma _{\phi /2m},\mathcal {B}(E^{2m}))\), \(g_{\alpha }\in S^{n_2-d_2}_{\mathcal {R}}(\mathbb {R}^{n-1}\times \Sigma _{\phi /2m},\mathcal {B}(E,E^{2m}))\) and \(k+d_1+d_2=|\alpha |\). This can be shown by induction on \(|\alpha |\). Using Lemma 4.3, the proof of [22, Lemma 4.22] carries over to our setting. \(\square \) Proposition 4.6 Let \(\zeta ,\delta \ge 0\), \(k\in \mathbb {N}_0\) and \(\vartheta (c,x_n,\mu )=x_n^{-\delta }|\mu |^{-\zeta } e^{-c|\mu |x_n}\) for \(c,x_n,\mu \in \mathbb {R}_+\). (a) For all \(l\in \mathbb {N}_0\), there are constants \(C,c>0\) such that $$\begin{aligned} \Vert (\xi ',\mu )\mapsto D_{x_n}^ke^{c\rho x_n}e^{i\rho A_0(b,\sigma )x_n}M_j(b,\sigma )\tfrac{1}{\rho ^{m_j}}\Vert ^{(k+\zeta -m_j-\delta ,\vartheta (c/2,x_n,\cdot \,))}_{l,\mathcal {R}}<C \end{aligned}$$ for all \(x_n\in \mathbb {R}_+\). (b) The mapping $$\begin{aligned}&\mathbb {R}_+\rightarrow S^{k+\zeta -m_j-\delta }_{\mathcal {R}}(\mathbb {R}^{n-1}\times \Sigma _{\phi /2m};\mathcal {B}(E,E^{2m})), x_n\mapsto [(\xi ',\mu )\\&\mapsto x_n^{\delta } e^{c\rho x_n}D_{x_n}^ke^{i\rho A_0(b,\sigma )x_n}M_j(b,\sigma )\tfrac{1}{\rho ^{m_j}}] \end{aligned}$$ is continuous. (c) If \(f\in BUC(\mathbb {R}_+,\mathbb {C})\) with \(f(0)=0\), then $$\begin{aligned}&\overline{\mathbb {R}_+}\rightarrow S^{k+\zeta -m_j-\delta }_{\mathcal {R}}(\mathbb {R}^{n-1}\times \Sigma _{\phi /2m};\mathcal {B}(E,E^{2m})),\\&x_n\mapsto [(\xi ',\mu )\mapsto f(x_n)x_n^{\delta } e^{c\rho x_n}D_{x_n}^ke^{i\rho A_0(b,\sigma )x_n}M_j(b,\sigma )\tfrac{1}{\rho ^{m_j}}] \end{aligned}$$ is uniformly continuous. (d) If \(\varepsilon \in (0,\delta ]\), then $$\begin{aligned}&\overline{\mathbb {R}_+}\rightarrow S^{k+\zeta +\varepsilon -m_j-\delta }_{\mathcal {R}}(\mathbb {R}^{n-1}\times \Sigma _{\phi /2m};\mathcal {B}(E,E^{2m})),\\&x_n\mapsto [(\xi ',\mu )\mapsto x_n^{\delta } e^{c\rho x_n}D_{x_n}^ke^{i\rho A_0(b,\sigma )x_n}M_j(b,\sigma )\tfrac{1}{\rho ^{m_j}}] \end{aligned}$$ is uniformly continuous. By Lemma 4.5, we have that \(D_{\xi '}^{\alpha '}D_{\mu }^{\gamma } e^{c\rho x_n}e^{i\rho A_0(b,\sigma )x_n}M_{j}(b,\sigma )\frac{1}{\rho ^{m_j}}\) is a linear combination of terms of the form $$\begin{aligned} f_{\alpha ',\gamma }e^{c\rho x_n+i\rho A_0(b,\sigma )x_n}\mathscr {P}_-(b,\sigma )g_{\alpha ',\gamma }x_n^{p} \end{aligned}$$ where \(f_{\alpha ',\gamma }\in S^{-d_1}(\mathbb {R}^{n-1}\times \mathbb {R}_+,\mathcal {B }( E^{2m},E^{2m}))\), \(g_{\alpha ',\gamma }\in S^{-m_{j}-d_2}(\mathbb {R}^{n-1}\times \mathbb {R}_+,\mathcal {B}(E,E^{2m}))\) and \(p+d_1+d_2=|\alpha '|+|\gamma |\).
But for such a term, we have that $$\begin{aligned}&\mathcal {R}\big (\{\vartheta (\tfrac{c}{2},x_n,\mu )^{-1}\rho ^{-k-\zeta +m_j+\delta +|\alpha '|+|\gamma |}D_{x_n}^kf_{\alpha ',l}(\xi ',\mu )e^{c\rho x_n}e^{i\rho A_0(b,\sigma )x_n}\\&\qquad \mathscr {P}_-(b,\sigma )g_{\alpha ',l}(\xi ',\mu )x_n^p : (\xi ',\mu )\in \mathbb {R}^{n-1}\times \Sigma _{\phi /2m}\}\big ) \\&\quad \le C \sum _{\widetilde{k}=0}^k \mathcal {R}\big (\{\vartheta (\tfrac{c}{2},x_n,\mu )^{-1}\rho ^{-k-\zeta +m_j+\delta +|\alpha '|+|\gamma |}f_{\alpha ',l}(\xi ',\mu )e^{c\rho x_n}e^{i\rho A_0(b,\sigma )x_n}\\&\qquad \mathscr {P}_-(b,\sigma )[c\rho +i\rho A_0(b,\sigma )]^{k-\widetilde{k}}g_{\alpha ',l}(\xi ',\mu )x_n^{[p-\widetilde{k}]_+}: (\xi ',\mu )\in \mathbb {R}^{n-1}\times \Sigma _{\phi /2m}\}\big )\\&\quad \le C \sum _{\widetilde{k}=0}^k\mathcal {R}\big (\{\vartheta (\tfrac{c}{2},x_n,\mu )^{-1} \rho ^{-\zeta +\delta -\widetilde{k}-d_1-d_2+|\alpha '|+|\gamma |}e^{-c\rho x_n}x_n^{[p-\widetilde{k}]_+}: (\xi ',\mu )\in \mathbb {R}^{n-1}\times \Sigma _{\phi /2m}\}\big )\\&\quad \le C \mathcal {R}\big (\{\vartheta (\tfrac{c}{2},x_n,\mu )^{-1}x_n^{-\delta }|\mu |^{-\zeta }\rho ^{-d_1-d_2-p+|\alpha '|+|\gamma |}e^{-\tfrac{c}{2}\rho x_n}: (\xi ',\mu )\in \mathbb {R}^{n-1}\times \Sigma _{\phi /2m}\}\big )\\&\quad \le C. \end{aligned}$$ From the second to the third line, we used Lemma 4.3 and Corollary 4.4(b). This gives (4-5). Again, we consider a term of the form $$\begin{aligned} f_{\alpha ',\gamma }e^{i\rho A_0(b,\sigma )x_n}\mathscr {P}_-(b,\sigma )g_{\alpha ',\gamma }x_n^{p} \end{aligned}$$ where \(f_{\alpha ',\gamma }\in S^{-d_1}(\mathbb {R}^{n-1}\times \mathbb {R}_+,\mathcal {B }( E^{2m},E^{2m}))\), \(g_{\alpha ',\gamma }\in S^{-m_{j}-d_2}(\mathbb {R}^{n-1}\times \mathbb {R}_+,\mathcal {B}(E,E^{2m}))\) and \(p+d_1+d_2=|\alpha '|+|\gamma |\). By the same computation as in part (a), we obtain $$\begin{aligned}&\mathcal {R}\big (\{\vartheta (\tfrac{c}{2},x_n,\mu )^{-1}\rho ^{-k-\zeta +m_j+\delta +|\alpha '|+|\gamma |}D_{x_n}^kf_{\alpha ',l}\\&(\xi ',\mu )[e^{c\rho (x_n+h)+i\rho A_0(b,\sigma )(x_n+h)}-e^{c\rho x_n+i\rho A_0(b,\sigma )x_n}]:\\&\qquad (\xi ',\mu )\in \mathbb {R}^{n-1}\times \Sigma _{\phi /2m}\})\\&\quad \le C \mathcal {R}\big (\{\vartheta (\tfrac{c}{2},x_n,\mu )^{-1}x_n^{-\delta }|\mu |^{-\zeta }\rho ^{-d_1-d_2-p+|\alpha '|+|\gamma |}[e^{c\rho h+i\rho A_0(b,\sigma )h}\\&\qquad -{\text {id}}_{E^{2m}}] e^{-\tfrac{3c}{4}\rho x_n}: (\xi ',\mu )\in \mathbb {R}^{n-1}\times \Sigma _{\phi /2m}\}\big )\\&\quad = C \mathcal {R}\big (\{[e^{c\rho h+i\rho A_0(b,\sigma )h}-{\text {id}}_{E^{2m}}] e^{-\tfrac{c}{4}\rho x_n}: (\xi ',\mu )\in \mathbb {R}^{n-1}\times \Sigma _{\phi /2m}\}\big )\\&\quad \le C \mathcal {R}\big (\{[e^{c\rho h+i\rho A_0(b,\sigma )h}-{\text {id}}_{E^{2m}}] e^{-\tfrac{c}{4}\rho x_n}: (\xi ',\mu )\in \mathbb {R}^{n-1}\times \Sigma _{\phi /2m}, \rho \le \tfrac{1}{\sqrt{h}}\}\big ) \\&\qquad +C \mathcal {R}\big (\{[e^{c\rho h+i\rho A_0(b,\sigma )h}-{\text {id}}_{E^{2m}}] e^{-\tfrac{cx_n}{4\sqrt{h}}}: (\xi ',\mu )\in \mathbb {R}^{n-1}\times \Sigma _{\phi /2m}, \rho \ge \tfrac{1}{\sqrt{h}}\}\big ). \end{aligned}$$ From Corollary 4.4 (a), it follows that the first \(\mathcal {R}\)-bound tends to 0 as \(h\rightarrow 0\). 
By Corollary 4.4 (b), it holds that $$\begin{aligned} \mathcal {R}(\{e^{c\rho h+i\rho A_0(b,\sigma )h}-{\text {id}}_{E^{2m}}:(\xi ',\mu )\in \mathbb {R}^{n-1}\times \Sigma _{\phi /2m}, \rho \ge \tfrac{1}{\sqrt{h}}\})<\infty \end{aligned}$$ and since \(x_n>0\) also the second \(\mathcal {R}\)-bound tends to 0 as \(h\rightarrow 0\). This shows the desired continuity. This follows by the same computation as in part (b). However, without f there would be no continuity at \(x_n=0\) as the second \(\mathcal {R}\)-bound $$\begin{aligned} \mathcal {R}\big (\{[e^{c\rho h+i\rho A_0(b,\sigma )h}-{\text {id}}_{E^{2m}}] e^{-\tfrac{cx_n}{2\sqrt{h}}}: (\xi ',\mu )\in \mathbb {R}^{n-1}\times \Sigma _{\phi /2m}, \rho \ge \tfrac{1}{\sqrt{h}}\}\big ) \end{aligned}$$ does not tend to 0 as \(h\rightarrow 0\) for \(x_n=0\). By adding f though, we obtain the desired continuity. This follows from part (c) with \(f(x_n)=x_n^{\varepsilon }\) for \(x_n\) close to 0. Given topological spaces \(Z_0,Z_1\) and \(z\in Z_0\), we now write $$\begin{aligned} {\text {ev}}_{z}:C(Z_0;Z_1)\rightarrow Z_1, f\mapsto f(z) \end{aligned}$$ for the evaluation map at z. Corollary 4.7 Let \(k\in \mathbb {N}_0\), \(p_0\in (1,\infty )\), \(q_0\in [1,\infty ]\), \(\zeta \ge 0\) and \(s_0,s,\widetilde{t}\in \mathbb {R}\). (a) There are constants \(C,c>0\) such that for all \(x_n>0\) and all \(\lambda \in \Sigma _{\phi }\) we have the parameter-dependent estimate $$\begin{aligned} \Vert [D_{x_n}^k{\text {Poi}}_j(\lambda )&f](\cdot ,x_n)\Vert _{\mathscr {A}^{\widetilde{t}+m_j-k-\zeta ,|\lambda |^{1/2m},s_0}(\mathbb {R}^{n-1},w,E^{2m})}\\ {}&\le Cx_n^{-[\widetilde{t}-s]_+}|\lambda |^{-\zeta /2m}e^{-c|\lambda |^{1/2m}x_n}\Vert f\Vert _{\mathscr {A}^{s,|\lambda |^{1/2m},s_0}(\mathbb {R}^{n-1},w,E)}\\&\quad (f\in \mathscr {S}(\mathbb {R}^{n-1},E)). \end{aligned}$$ (b) There is a constant \(c>0\) such that for all \(\lambda \in \Sigma _{\phi }\) we have that $$\begin{aligned} K(\lambda ):=[x_n\mapsto x_n^{[\widetilde{t}-s]_+}|\lambda |^{\frac{\zeta -[-[\widetilde{t}-s]_+-m_j+k+\zeta ]_+}{2m}}e^{c|\lambda |^{1/2m} x_n}{\text {ev}}_{x_n}D_{x_n}^k{\text {Poi}}_j(\lambda )] \end{aligned}$$ is an element of $$\begin{aligned} C_{\mathcal {R}B}\big (\mathbb {R}_+;\mathcal {B}(\mathscr {A}^s(\mathbb {R}^{n-1},w;E^{2m}),\mathscr {A}^{\widetilde{t}+m_j-k-\zeta }(\mathbb {R}^{n-1},w;E^{2m}))\big ). \end{aligned}$$ Moreover, for all \(\sigma >0\) we have that the set \(\{K(\lambda ):\lambda \in \Sigma _{\phi },\,|\lambda |\ge \sigma \}\) is \(\mathcal {R}\)-bounded in \(C_{\mathcal {R}B}\big (\mathbb {R}_+;\mathcal {B}(\mathscr {A}^s(\mathbb {R}^{n-1},w;E^{2m}),\mathscr {A}^{\widetilde{t}+m_j-k-\zeta }(\mathbb {R}^{n-1},w;E^{2m}))\big )\). (c) Let \(f\in BUC([0,\infty ),\mathbb {C})\) such that \(f(0)=0\). There is a constant \(c>0\) such that for all \(\lambda \in \Sigma _{\phi }\) we have that $$\begin{aligned} K_f(\lambda ):=[x_n\mapsto f(x_n)x_n^{[\widetilde{t}-s]_+}|\lambda |^{\frac{\zeta -[-[\widetilde{t}-s]_+-m_j+k+\zeta ]_+}{2m}}e^{c|\lambda |^{1/2m} x_n}{\text {ev}}_{x_n}D_{x_n}^k{\text {Poi}}_j(\lambda )] \end{aligned}$$ is an element of $$\begin{aligned} BUC_{\mathcal {R}}\big (\mathbb {R}_+;\mathcal {B}(\mathscr {A}^s(\mathbb {R}^{n-1},w;E^{2m}),\mathscr {A}^{\widetilde{t}+m_j-k-\zeta }(\mathbb {R}^{n-1},w;E^{2m}))\big ).
\end{aligned}$$ Moreover, for all \(\sigma >0\) we have that the set \(\{K_f(\lambda ):\lambda \in \Sigma _{\phi },\,|\lambda |\ge \sigma \}\) is \(\mathcal {R}\)-bounded in \(BUC_{\mathcal {R}}\big (\mathbb {R}_+;\mathcal {B}(\mathscr {A}^s(\mathbb {R}^{n-1},w;E^{2m}),\mathscr {A}^{\widetilde{t}+m_j-k-\zeta }(\mathbb {R}^{n-1},w;E^{2m}))\big )\). Let \(\varepsilon >0\) and let \(K(\lambda )\) be defined as in Part (b). Then, \(K(\lambda )\) is an element of $$\begin{aligned} BUC_{\mathcal {R}}\big (\mathbb {R}_+;\mathcal {B}(\mathscr {A}^s(\mathbb {R}^{n-1},w;E^{2m}),\mathscr {A}^{\widetilde{t}+m_j-\varepsilon -k-\zeta }(\mathbb {R}^{n-1},w;E^{2m}))\big ). \end{aligned}$$ Moreover, for all \(\sigma >0\) we have that the set \(\{K(\lambda ):\lambda \in \Sigma _{\phi },\,|\lambda |\ge \sigma \}\) is \(\mathcal {R}\)-bounded in \(BUC_{\mathcal {R}}\big (\mathbb {R}_+;\mathcal {B}(\mathscr {A}^s(\mathbb {R}^{n-1},w;E^{2m}),\mathscr {A}^{\widetilde{t}+m_j-\varepsilon -k-\zeta }(\mathbb {R}^{n-1},w;E^{2m}))\big )\). By Proposition 4.6, we have that $$\begin{aligned} \big (D_{x_n}^k e^{i\rho A_0(b,\sigma )x_n}\frac{M_{j}(b,\sigma )}{\rho ^{m_j}}\big )_{x_n>0}&\subset S^{\zeta +k-m_{j}-[\widetilde{t}-s]_+}_{\mathcal {R}}(\mathbb {R}^{n-1}\times \Sigma _{\phi /2m};\mathcal {B}(E,E^{2m}))\\ {}&\subset S^{\zeta +k-m_{j}-\widetilde{t}+s}_{\mathcal {R}}(\mathbb {R}^{n-1}\times \Sigma _{\phi /2m};\mathcal {B}(E,E^{2m})). \end{aligned}$$ Therefore, it follows from (4-5) together with the mapping properties for parameter-dependent pseudo-differential operators, Proposition 3.5, that \({\text {ev}}_{x_n}D_{x_n}^k{\text {Poi}}_j(\lambda )\) maps \(\mathscr {A}^{s,|\lambda |^{1/2m},s_0}(\mathbb {R}^{n-1},w,E)\) into \(\mathscr {A}^{\widetilde{t}+m_j-k-\zeta ,|\lambda |^{1/2m},s_0}(\mathbb {R}^{n-1},w,E^{2m})\) with a bound on the operator norms which is given by \(Cx_n^{-[\widetilde{t}-s]_+}|\lambda |^{-\zeta /2m} e^{-c|\lambda |^{1/2m}x_n}\) for all \(\widetilde{t},s\in \mathbb {R}\), \(x_n>0\) and all \(\zeta \ge 0\). We use Proposition 4.6(a) together with Proposition 3.6. Then, we obtain $$\begin{aligned}&\mathcal {R}_{\mathcal {B}(\mathscr {A}^s(\mathbb {R}^{n-1},w;E^{2m}),\mathscr {A}^{\widetilde{t}+m_j-k-\zeta }(\mathbb {R}^{n-1},w;E^{2m})}\big (\{x_n^{[\widetilde{t}-s]_+}|\lambda |^{\frac{\zeta -[-[\widetilde{t}-s]_+-m_j+k+\zeta ]_+}{2m}}e^{c|\lambda |^{1/2m}x_n}\\&\qquad {\text {ev}}_{x_n}D_{x_n}^k{\text {Poi}}_j(\lambda ):\lambda \in \Sigma _{\phi },\,|\lambda |\ge \sigma \}\big )\\&\quad \le \mathcal {R}_{\mathcal {B}(\mathscr {A}^{\widetilde{t}-[\widetilde{t}-s]_+}(\mathbb {R}^{n-1},w;E^{2m}),\mathscr {A}^{\widetilde{t}+m_j-k-\zeta }(\mathbb {R}^{n-1},w;E^{2m}))}\big (\{x_n^{[\widetilde{t}-s]_+}|\lambda |^{\frac{\zeta -[-[\widetilde{t}-s]_+-m_j+k+\zeta ]_+}{2m}}e^{c|\lambda |^{1/2m}x_n}\\&\qquad {\text {ev}}_{x_n}D_{x_n}^k{\text {Poi}}_j(\lambda ):\lambda \in \Sigma _{\phi },\,|\lambda |\ge \sigma \}\big )\\&\quad \le C_{\sigma }. \end{aligned}$$ This shows that $$\begin{aligned} \mathcal {R}(\{[K(\lambda )](x_n):x_n\ge 0,\,\lambda \in \Sigma _{\phi },\,|\lambda |\ge \sigma \})<\infty \end{aligned}$$ in \(\mathcal {B}(\mathscr {A}^s(\mathbb {R}^{n-1},w;E^{2m}),\mathscr {A}^{\widetilde{t}+m_j-k-\zeta }(\mathbb {R}^{n-1},w;E^{2m}))\) it remains to show that the \(K(\lambda )\) are \(\mathcal {R}\)-continuous. But this follows from the continuity statement in Proposition 4.6 (b) together with Proposition 3.6. This follows as Part (b) but with Proposition 4.6(c) instead of Proposition 4.6(b). 
This follows as Part (b) but with Proposition 4.6 (d) instead of Proposition 4.6(b). Consider the situation of Corollary 4.7 and let \(p\in [1,\infty )\), \(r\in \mathbb {R}\). In order to shorten the formulas, we write \(\gamma _1=r-p[\widetilde{t}-s]_+\) and \(\gamma _2=p\zeta -p[-[\widetilde{t}-s]_+-m_j+ k+\zeta ]_+\). Suppose that \(\gamma _1>-1\). Then, for all \(\sigma >0\) and all there is a constant \(C>0\) such that for all \(\lambda \in \Sigma _{\phi }\) with \(|\lambda |\ge \sigma \) and all \(f\in \mathscr {A}^{s}_p(\mathbb {R}^{n-1},w;E)\) it holds that $$\begin{aligned} \Vert {\text{ Poi }}_{j}(\lambda )f\Vert _{W^{k}_{p}(\mathbb {R}_+,|{\text{ pr }}_n|^r,\mathscr {A}^{\widetilde{t}+m_j- k-\zeta }(\mathbb {R}^{n-1},w;E^{2m}))}\le C|\lambda |^{\frac{-1-\gamma _1-\gamma _2}{2mp}} \Vert f\Vert _{\mathscr {A}^{s}_p(\mathbb {R}^{n-1},w;E)}. \end{aligned}$$ We use Corollary 4.7 and obtain $$\begin{aligned}&\Vert {\text{ Poi }}_{j}(\lambda )f\Vert _{W^{k}_{p}(\mathbb {R}_+,|{\text{ pr }}_n|^r,\mathscr {A}^{\widetilde{t}+m_j- k-\zeta }(\mathbb {R}^{n-1},w;E^{2m}))}^p \\&\quad = \sum _{l=0}^{k}\int _0^{\infty } \Vert [D_{x_n}^l{\text{ Poi }}_{j}(\lambda )f](\,\cdot \,,x_n) \Vert _{\mathscr {A}^{\widetilde{t}+m_j- k-\zeta }(\mathbb {R}^{n-1},w;E^{2m})}^px_n^r\,\mathrm {d}x_n\\ {}&\quad \le C \Vert f\Vert _{\mathscr {A}^{s}_p(\mathbb {R}^{n-1},w;E)}^p\sum _{l=0}^{k}|\lambda |^{\frac{p[-[\widetilde{t}-s]_+-m_j+ l-\zeta ]_+-p\zeta }{2m}}\int _0^{\infty } x_n^{\gamma _1} e^{-cp|\lambda |^{1/2m}x_n}\,\mathrm {d}x_n\\ {}&\quad \le C |\lambda |^{\frac{-\gamma _2}{2m}}\Vert f\Vert _{\mathscr {A}^{s}_p(\mathbb {R}^{n-1},w;E)}^p\int _0^{\infty } x_n^{\gamma _1} e^{-c|\lambda |^{1/2m}x_n}\,\mathrm {d}x_n\\ {}&\quad \le C|\lambda |^{\frac{-1-\gamma _1-\gamma _2}{2m}} \Vert f\Vert _{\mathscr {A}^{s}_p(\mathbb {R}^{n-1},w;E)}^p\int _{0}^{\infty } y_n^{\gamma _1} e^{-cy_n}\,dy_n\\ {}&\quad \le C|\lambda |^{\frac{-1-\gamma _1-\gamma _2}{2m}} \Vert f\Vert _{\mathscr {A}^{s}_p(\mathbb {R}^{n-1},w;E)}^p \end{aligned}$$ for all \(f\in \mathscr {A}^{s}_p(\mathbb {R}^{n-1},w;E)\). \(\square \) Consider the situation of Corollary 4.7 and let \(p\in [1,\infty )\), \(r\in \mathbb {R}\). Again we write \(\gamma _1=r-p[\widetilde{t}-s]_+\) as well as \(\gamma _2=p\zeta -p[-[\widetilde{t}-s]_+-m_j+ k+\zeta ]_+\). Suppose that \(\gamma _1>-1\) and take \(\varepsilon \in (0,1+\gamma _1)\). Then, for all \(\sigma >0\) and all there is a constant \(C>0\) such that for all \(\varepsilon \ge 0\) $$\begin{aligned} \mathcal {R}\big (\{|\lambda |^{\frac{1+\gamma _1+\gamma _2-\varepsilon }{2mp}}{\text {Poi}}_{j}(\lambda ):\lambda \in \Sigma _{\phi },\,|\lambda |\ge \sigma \}\big )\le C, \end{aligned}$$ where the \(\mathcal {R}\)-bounds are taken in \(\mathcal {B}(\mathscr {A}^{s}(\mathbb {R}^{n-1},w;E),W^{k}_{p}(\mathbb {R}_+,|{\text {pr}}_n|^{r};\mathscr {A}^{\widetilde{t}+m_j- k-\zeta }(\mathbb {R}^{n-1},w;E^{2m}))).\) Let \((\varepsilon _l)_{l\in \mathbb {N}}\) be a Rademacher sequence on the probability space \((\Omega ,\mathcal {F},\mathbb {P})\) and let \(N\in \mathbb {N}\), \(\lambda _1,\ldots ,\lambda _N\in \Sigma _{\phi }\) and \(f_1,\ldots ,f_N\in \mathscr {A}^s(\mathbb {R}^{n-1},w;E)\). 
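For orientation, we recall the form of Kahane's contraction principle that enters the next estimate: for complex scalars \(a_1,\ldots ,a_N\) and elements \(x_1,\ldots ,x_N\) of a Banach space X one has $$\begin{aligned} \bigg \Vert \sum _{l=1}^N \varepsilon _l a_l x_l\bigg \Vert _{L_p(\Omega ;X)}\le 2\max _{l=1,\ldots ,N}|a_l|\,\bigg \Vert \sum _{l=1}^N \varepsilon _l x_l\bigg \Vert _{L_p(\Omega ;X)}. \end{aligned}$$ Below, this is applied pointwise in \(x_n\) with \(X=\mathscr {A}^s(\mathbb {R}^{n-1},w;E)\), \(x_l=f_l\) and \(a_l=|\lambda _l|^{\frac{1+\gamma _1-\varepsilon }{2mp}}x_n^{-[\widetilde{t}-s]_+}e^{-\frac{c}{2}|\lambda _l|^{1/2m}x_n}\); this is how the maximum under the integral in the following computation arises.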
Using Corollary 4.7 and Kahane's contraction principle, we obtain $$\begin{aligned}&\left\| \sum _{l=1}^N \varepsilon _l|\lambda _l|^{\frac{1+\gamma _1+\gamma _2-\varepsilon }{2mp}}{\text {Poi}}_{j}(\lambda _l) f_l \right\| _{L_p(\Omega ;W^k_p(\mathbb {R}_+,|{\text {pr}}_n|^{r},\mathscr {A}^{\widetilde{t}+m_j- k-\zeta }(\mathbb {R}^{n-1},w;E^{2m})))}\\&\quad \eqsim \left\| \sum _{l=1}^N \varepsilon _l|\lambda _l|^{\frac{1+\gamma _1+\gamma _2-\varepsilon }{2mp}}{\text {Poi}}_{j}(\lambda _l) f_l \right\| _{W^k_p(\mathbb {R}_+,|{\text {pr}}_n|^{r};L_p(\Omega ;\mathscr {A}^{\widetilde{t}+m_j- k-\zeta }(\mathbb {R}^{n-1},w;E^{2m})))}\\&\quad \eqsim \sum _{\widetilde{k}=0}^k\left\| \sum _{l=1}^N \varepsilon _l|\lambda _l|^{\frac{1+\gamma _1+\gamma _2-\varepsilon }{2mp}}D_{x_n}^{\widetilde{k}}{\text {Poi}}_{j}(\lambda _l) f_l \right\| _{L_p(\mathbb {R}_+,|{\text {pr}}_n|^{r};L_p(\Omega ;\mathscr {A}^{\widetilde{t}+m_j- k-\zeta }(\mathbb {R}^{n-1},w;E^{2m})))}\\&\quad \lesssim _{\sigma }\left\| x_n\mapsto \sum _{l=1}^N \varepsilon _l|\lambda _l|^{\frac{1+\gamma _1-\varepsilon }{2mp}}x_n^{-[\widetilde{t}-s]_+} e^{-\frac{c}{2}|\lambda _l|^{1/2m}x_n}f_l \right\| _{L_p(\mathbb {R}_+,|{\text {pr}}_n|^{r};L_p(\Omega ;\mathscr {A}^{s}(\mathbb {R}^{n-1},w;E)))}\\&\quad \lesssim \left( \int _{0}^{\infty } \max _{l=1,\ldots ,N}\{|\lambda _l|^{\frac{1+\gamma _1-\varepsilon }{2m}}e^{-p\frac{c}{2}|\lambda _l|^{1/2m}x_n} x_n^{\gamma _1}\}\,\mathrm{d}x_n\right) ^{1/p}\left\| \sum _{l=1}^N \varepsilon _lf_l \right\| _{L_p(\Omega ;\mathscr {A}^{s}(\mathbb {R}^{n-1},w;E))}\\&\quad \lesssim \left( \int _{0}^{\infty } \max _{l=1,\ldots ,N}\{e^{-p\frac{c}{3}|\lambda _l|^{1/2m}x_n} x_n^{-1+\varepsilon }\}\,\mathrm{d}x_n\right) ^{1/p}\left\| \sum _{l=1}^N \varepsilon _lf_l \right\| _{L_p(\Omega ;\mathscr {A}^{s}(\mathbb {R}^{n-1},w;E))}\\&\quad \le \left( \int _{0}^{\infty } e^{-p\frac{c}{3}\sigma ^{1/2m}x_n} x_n^{-1+\varepsilon }\,\mathrm{d}x_n\right) ^{1/p}\left\| \sum _{l=1}^N \varepsilon _lf_l \right\| _{L_p(\Omega ;\mathscr {A}^{s}(\mathbb {R}^{n-1},w;E))}\\&\quad \le C\left\| \sum _{l=1}^N \varepsilon _lf_l \right\| _{L_p(\Omega ;\mathscr {A}^{s}(\mathbb {R}^{n-1},w;E))} \end{aligned}$$ for all \(N\in \mathbb {N}\), all \(\lambda _1,\ldots ,\lambda _N\in \Sigma _{\phi }\) and all \(f_1,\ldots ,f_N\in \mathscr {A}^s(\mathbb {R}^{n-1},w;E)\). This is the desired estimate. \(\square \) Comparing Proposition 4.8 and Proposition 4.9, one might wonder if one can omit the \(\varepsilon \) in Proposition 4.9. After having applied Kahane's contraction principle in the proof of Proposition 4.9, it seems like the \(\varepsilon \) is necessary. Roughly speaking, taking \((\lambda _l)_{l\in \mathbb {N}}\) such that this sequence is dense in \(\Sigma _{\phi }{\setminus }\overline{B(0,\sigma )}\) will cause \(\max _{l=1,\ldots ,N}\{|\lambda _l|^{\frac{\gamma _1+1}{2m}}e^{-p\frac{c}{2}|\lambda _l|^{1/2m}x_n} x_n^{\gamma _1}\}\) to have a singularity of the form \(x_n^{-1}\) at \(x_n=0\) if \(N\rightarrow \infty \). Indeed, taking \(|\lambda _l|^{1/2m}\) close to \(x_n^{-1}\) yields that \(|\lambda _l|^{\frac{\gamma _1+1}{2m}}e^{-p\frac{c}{2}|\lambda _l|^{1/2m}x_n} x_n^{\gamma _1}\) is close to \(x_n^{-1} e^{-pc/2}\). Hence, the integral \(\left( \int _{0}^{\infty } \max _{l=1,\ldots ,N}\{|\lambda _l|^{\frac{\gamma _1+1}{2m}}e^{-p\frac{c}{2}|\lambda _l|^{1/2m}x_n} x_n^{\gamma _1}\}\,\mathrm{d}x_n\right) ^{1/p}\) will tend to \(\infty \) as \(N\rightarrow \infty \). 
Thus, if one wants to remove the \(\varepsilon \), it seems like one should not apply Kahane's contraction principle as it is applied in the proof of Proposition 4.9. This can for example be avoided under a cotype assumption on E together with a restriction on p, as Proposition 4.11 shows. However, there are some cases in which the \(\varepsilon \) cannot be removed. We will show this in Proposition 4.13. Consider the situation of Corollary 4.7 and let \(r\in \mathbb {R}\). Suppose that E has finite cotype \(q_E\). Suppose that the assumptions of Proposition 2.14 hold true. Again, we define \(\gamma _1=r-p[\widetilde{t}-s]_+\) as well as \(\gamma _2=p\zeta -p[-[\widetilde{t}-s]_+-m_j+ k+\zeta ]_+\). Suppose that \(\gamma _1>-1\). Then, for all \(\sigma >0\) there is a constant \(C>0\) such that $$\begin{aligned} \mathcal {R}\big (\{|\lambda |^{\frac{1+\gamma _1+\gamma _2}{2mp}}{\text {Poi}}_{j}(\lambda ):\lambda \in \Sigma _{\phi },\,|\lambda |\ge \sigma \}\big )\le C, \end{aligned}$$ where \(\mathcal {R}\)-bounds are taken in \(\mathcal {B}(\mathscr {A}^{s}(\mathbb {R}^{n-1},w;E),W^{k}_{p}(\mathbb {R}_+,|{\text {pr}}_n|^{r};\mathscr {A}^{\widetilde{t}+m_j- k-\zeta }(\mathbb {R}^{n-1},w;E^{2m}))).\) Let \((\varepsilon _l)_{l\in \mathbb {N}}\) be a Rademacher sequence on the probability space \((\Omega ,\mathcal {F},\mathbb {P})\) and let \(N\in \mathbb {N}\), \(\lambda _1,\ldots ,\lambda _N\in \Sigma _{\phi }\) and \(f_1,\ldots ,f_N\in \mathscr {A}^s(\mathbb {R}^{n-1},w;E)\). Using Corollary 4.7 and Proposition 2.14, we obtain $$\begin{aligned}&\left\| \sum _{l=1}^N \varepsilon _l|\lambda _l|^{\frac{1+\gamma _1+\gamma _2}{2mp}}{\text {Poi}}_{j}(\lambda _l) f_l \right\| _{L_p(\Omega ;W^k_p(\mathbb {R}_+,|{\text {pr}}_n|^{r},\mathscr {A}^{\widetilde{t}+m_j- k-\zeta }(\mathbb {R}^{n-1},w;E^{2m})))}\\&\quad \eqsim \left\| \sum _{l=1}^N \varepsilon _l|\lambda _l|^{\frac{1+\gamma _1+\gamma _2}{2mp}}{\text {Poi}}_{j}(\lambda _l) f_l \right\| _{W^k_p(\mathbb {R}_+,|{\text {pr}}_n|^{r};L_p(\Omega ;\mathscr {A}^{\widetilde{t}+m_j- k-\zeta }(\mathbb {R}^{n-1},w;E^{2m})))}\\&\quad \eqsim \sum _{\widetilde{k}=0}^k\left\| \sum _{l=1}^N \varepsilon _l|\lambda _l|^{\frac{1+\gamma _1+\gamma _2}{2mp}}D_{x_n}^{\widetilde{k}}{\text {Poi}}_{j}(\lambda _l) f_l \right\| _{L_p(\mathbb {R}_+,|{\text {pr}}_n|^{r};L_p(\Omega ;\mathscr {A}^{\widetilde{t}+m_j- k-\zeta }(\mathbb {R}^{n-1},w;E^{2m})))}\\&\quad \lesssim _{\sigma }\left\| x_n\mapsto \sum _{l=1}^N \varepsilon _l|\lambda _l|^{\frac{1+\gamma _1}{2mp}}x_n^{-[\widetilde{t}-s]_+} e^{-\frac{c}{2}|\lambda _l|^{1/2m}x_n}f_l \right\| _{L_p(\mathbb {R}_+,|{\text {pr}}_n|^{r};L_p(\Omega ;\mathscr {A}^{s}(\mathbb {R}^{n-1},w;E)))}\\&\quad \lesssim \max _{l=1,\ldots ,N}\left( \int _{0}^{\infty } |\lambda _l|^{\frac{1+\gamma _1}{2m}}e^{-p\frac{c}{2}|\lambda _l|^{1/2m}x_n} x_n^{\gamma _1}\,\mathrm{d}x_n\right) ^{1/p}\left\| \sum _{l=1}^N \varepsilon _lf_l \right\| _{L_p(\Omega ;\mathscr {A}^{s}(\mathbb {R}^{n-1},w;E))}\\&\quad =\left( \int _{0}^{\infty } e^{-p\frac{c}{2}y} y^{\gamma _1}\,dy\right) ^{1/p}\left\| \sum _{l=1}^N \varepsilon _lf_l \right\| _{L_p(\Omega ;\mathscr {A}^{s}(\mathbb {R}^{n-1},w;E))}\\&\quad \le C\left\| \sum _{l=1}^N \varepsilon _lf_l \right\| _{L_p(\Omega ;\mathscr {A}^{s}(\mathbb {R}^{n-1},w;E))} \end{aligned}$$ Let us now see what can happen if the cotype assumption is not satisfied. 
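Before doing so, let us briefly recall the notion used for the contradiction: a Banach space X has cotype \(q\in [2,\infty )\) if there is a constant \(C>0\) such that $$\begin{aligned} \bigg (\sum _{l=1}^N\Vert x_l\Vert _X^q\bigg )^{1/q}\le C\bigg \Vert \sum _{l=1}^N \varepsilon _l x_l\bigg \Vert _{L_q(\Omega ;X)} \end{aligned}$$ for all finite families \(x_1,\ldots ,x_N\in X\), where \((\varepsilon _l)_{l\in \mathbb {N}}\) is a Rademacher sequence on a probability space \((\Omega ,\mathcal {F},\mathbb {P})\) (equivalently, by the Kahane–Khintchine inequalities, with any other exponent on the right-hand side). A classical consequence of the Khintchine inequality is that no nontrivial Banach space has cotype \(q<2\); this is the fact that produces the contradiction at the end of the proof of Proposition 4.13.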
Let \(\widetilde{\mathscr {A}}^s\) be defined by $$\begin{aligned} \widetilde{\mathscr {A}}^s:=\{u\in \mathscr {A}^s: {\text {supp}}\mathscr {F}u\subset \overline{B(0,1)} \} \end{aligned}$$ where B(0, 1) denotes the ball with center 0 and radius 1. We endow \(\widetilde{\mathscr {A}}^s\) with the norm \(\Vert \cdot \Vert _{\mathscr {A}^s}\). Then, \(\widetilde{\mathscr {A}}^s\) is a Banach space. Let \((u_n)_{n\in \mathbb {N}}\subset \widetilde{\mathscr {A}}^s\) be a Cauchy sequence. Since \(\mathscr {A}^s\) is a Banach space, we only have to prove that the limit \(u:=\lim _{n\rightarrow \infty }u_n\) satisfies \({\text {supp}}\mathscr {F}u\subset \overline{B(0,1)}\). But since $$\begin{aligned} \mathscr {F}:\mathscr {A}^s\rightarrow \mathscr {S}'(\mathbb {R}^n;E) \end{aligned}$$ is continuous, it follows that $$\begin{aligned}{}[\mathscr {F}u](f)=\lim _{n\rightarrow \infty }[\mathscr {F}u_n](f)=0 \end{aligned}$$ for all \(f\in \mathscr {S}(\mathbb {R}^n)\) such that \({\text {supp}} f\subset \overline{B(0,1)}^c\). This shows the assertion. \(\square \) Let \(\sigma >0\), \(r\in \mathbb {R}\) and \(p\in [1,2)\). For \(\lambda \ge \sigma \) and \(g\in \mathscr {A}^s\) let \(u_\lambda :={\text {Poi}}_{\Delta }(\lambda )g\) be the solution of $$\begin{aligned} \lambda u_{\lambda }(x)-\Delta u_{\lambda }(x)&=0\quad \quad (x\in \mathbb {R}^n_+),\nonumber \\ u_{\lambda }(x',0)&=g(x')\quad (x'\in \mathbb {R}^{n-1}) \end{aligned}$$ which is decaying in normal direction. Then, the set of operators $$\begin{aligned} \{|\lambda |^{\frac{1+r}{2p}}{\text {Poi}}_{\Delta }(\lambda ):\lambda \ge \sigma \}\subset \mathcal {B}(\mathscr {A}^s, L_p(\mathbb {R}_+,|{\text {pr}}_n|^r;\mathscr {A}^s)) \end{aligned}$$ is not \(\mathcal {R}\)-bounded. Applying Fourier transform in tangential direction to (4-6), we obtain $$\begin{aligned} \partial _n^2 \hat{u}(\xi ',x_n)&=(\lambda +|\xi '|^2)\hat{u}(\xi ',x_n),\\&\hat{u}(\xi ',0)=\hat{g}(\xi '). \end{aligned}$$ The stable solution of this equation is given by \(e^{-(\lambda +|\xi '|^2)^{1/2} x_n}\hat{g}(\xi ')\) so that the decaying solution of (4-6) is given by $$\begin{aligned} u_{\lambda }(x',x_n)={\text {Poi}}_{\Delta }(\lambda )g= [\mathscr {F}_{x'\rightarrow \xi '}^{-1}e^{-(\lambda +|\xi '|^2)^{1/2} x_n}\mathscr {F}_{x'\rightarrow \xi '}g](x'). \end{aligned}$$ Let \(\chi \subset \mathscr {D}(\mathbb {R}^{n-1})\) be a test function with \(\chi (\xi ')=1\) for \(\xi '\in B(0,1)\) and \({\text {supp}}\chi \subset B(0,2)\). It holds that \(\chi (\xi ')e^{((\lambda +|\xi '|^2)^{1/2}-|\lambda |^{1/2})x_n}\) satisfies the Mikhlin condition uniformly in \(\lambda \ge \sigma \) and \(x_n\le 1\). Hence, we have that $$\begin{aligned} \{{\text {op}}[\chi (\xi ')e^{((\lambda +|\xi '|^2)^{1/2}-|\lambda |^{1/2})x_n}]:\lambda \ge \sigma ,x_n\in [0,1]\}\subset \mathcal {B}(\widetilde{\mathscr {A}^s}) \end{aligned}$$ is \(\mathcal {R}\)-bounded, where \(\widetilde{\mathscr {A}^s}\) is defined as in Lemma 4.12. Using these observations together with the \(\mathcal {R}\)-boundedness of \(\{|\lambda |^{\frac{1+r}{2p}}{\text {Poi}}_{\Delta }(\lambda ):\lambda \ge \sigma \}\), we can carry out the following calculation: Let \((\varepsilon _l)_{l\in \mathbb {N}}\) be a Rademacher sequence on the probability space \((\Omega ,\mathcal {F},\mathbb {P})\), \(\lambda _l=(\sigma 2^l)^2\) \((l\in \mathbb {N})\), \(N\in \mathbb {N}\) and \(g_1,\ldots ,g_N\in \widetilde{\mathscr {A}}^s\). 
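The specific choice \(\lambda _l=(\sigma 2^l)^2\) is tailored to the dyadic decomposition used below: it gives \(|\lambda _l|^{1/2}=\sigma 2^l\) and hence $$\begin{aligned} |\lambda _m|^{1/2}x_n=\sigma 2^m x_n\in [1,2]\quad \text {for }x_n\in [\sigma ^{-1}2^{-m},\sigma ^{-1}2^{-m+1}], \end{aligned}$$ so that the integral of \(\lambda _m^{\frac{1+r}{2}}e^{-p|\lambda _m|^{1/2}x_n}x_n^r\) over the m-th dyadic interval is bounded from below by a constant which does not depend on m. Roughly speaking, this is what makes the last two steps of the following estimate work.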
Then, we obtain $$\begin{aligned} \left\| \sum _{l=1}^N \varepsilon _l g_l\right\| _{L_p(\Omega ;\widetilde{\mathscr {A}}^s)}& > rsim \left( \int _{\Omega }\int _{0}^{\sigma ^{-1}} \bigg \Vert \sum _{l=1}^N \varepsilon _l\lambda _l^{\frac{1+r}{2p}}[{\text {Poi}}_{\Delta }(\lambda _l)g_l](\,\cdot ,x_n)\bigg \Vert ^p_{\mathscr {A}^s}x_n^r\,\mathrm{d}x_n\,d\mathbb {P}\right) ^{1/p}\\&= \left( \int _{\Omega }\int _{0}^{\sigma ^{-1}} \bigg \Vert \sum _{l=1}^N \varepsilon _l\lambda _l^{\frac{1+r}{2p}}{\text {op}}[\chi (\xi ')e^{-(\lambda +|\xi '|^2)^{1/2} x_n}]g_l\bigg \Vert ^p_{\mathscr {A}^s}x_n^r\,\mathrm{d}x_n\,d\mathbb {P}\right) ^{1/p}\\& > rsim \left( \int _{\Omega }\int _{0}^{\sigma ^{-1}} \bigg \Vert \sum _{l=1}^N \varepsilon _l\lambda _l^{\frac{1+r}{2p}}e^{-|\lambda _l|^{1/2}x_n}g_l\bigg \Vert ^p_{\mathscr {A}^s}x_n^r\,\mathrm{d}x_n\,d\mathbb {P}\right) ^{1/p}\\& > rsim \left( \int _{\Omega }\sum _{m=1}^N\int _{\sigma ^{-1}2^{-m}}^{\sigma ^{-1}2^{-m+1}} \bigg \Vert \sum _{l=1}^N \varepsilon _l\lambda _l^{\frac{1+r}{2p}}e^{-|\lambda _l|^{1/2}x_n}g_l\bigg \Vert ^p_{\mathscr {A}^s}x_n^r\,\mathrm{d}x_n\,d\mathbb {P}\right) ^{1/p}\\& > rsim \left( \int _{\Omega }\sum _{m=1}^N\int _{\sigma ^{-1}2^{-m}}^{\sigma ^{-1}2^{-m+1}} \lambda _m^{\frac{1+r}{2}}e^{-p|\lambda _m|^{1/2}x_n}\Vert g_m\Vert ^p_{\mathscr {A}^s}x_n^r\,\mathrm{d}x_n\,d\mathbb {P}\right) ^{1/p}\\& > rsim \left( \sum _{m=1}^N\Vert g_m\Vert ^p_{\mathscr {A}^s} \right) ^{1/p}. \end{aligned}$$ This shows that \(\widetilde{\mathscr {A}}^s\) has cotype p. However, \(\widetilde{\mathscr {A}}^s\) is a nontrivial Banach space by Lemma 4.12 and therefore its cotype must satisfy \(p\ge 2\). This contradicts \(p\in [1,2)\) and hence, \(\{|\lambda |^{\frac{1+r}{2p}}{\text{ Poi }}_{\Delta }(\lambda ):\lambda \ge \sigma \}\) cannot be \(\mathcal {R}\)-bounded. \(\square \) Proposition 4.13 shows that it is not possible in general to remove the \(\varepsilon \) in Proposition 4.9. Even though we only treat the Laplacian with Dirichlet boundary conditions in Proposition 4.13, it seems like the integrability parameter in normal direction may not be smaller than the cotype of the space in tangential directions in order to obtain the sharp estimate of Proposition 4.11. Depending on what one aims for, it can also be better to substitute \(t=\widetilde{t}+m_j-k-\zeta \) in Proposition 4.8, Proposition 4.9 or Proposition 4.11. In this case, we obtain the estimates $$\begin{aligned} \Vert {\text {Poi}}_{j}(\lambda )\Vert&\le C|\lambda |^{\frac{-1-\gamma _1-\gamma _2}{2mp}},\quad (\text {Proposition } 4.8),\\ \mathcal {R}\big (\{|\lambda |^{\frac{1+\gamma _1+\gamma _2-\varepsilon }{2mp}}{\text {Poi}}_{j}(\lambda ):\lambda \in \Sigma _{\phi },\,|\lambda |\ge \sigma \}\big )&\le C,\quad (\text {Proposition } 4.9),\\ \mathcal {R}\big (\{|\lambda |^{\frac{1+\gamma _1+\gamma _2}{2mp}}{\text {Poi}}_{j}(\lambda ):\lambda \in \Sigma _{\phi },\,|\lambda |\ge \sigma \}\big )&\le C,\quad (\text {Proposition } 4.11), \end{aligned}$$ $$\begin{aligned}&\gamma _1=r-p[t+k+\zeta -m_j-s]_+,\\&\quad \gamma _2=p\zeta -p[-[t+k+\zeta -m_j-s]_+-m_j+k+\zeta ]_+, \end{aligned}$$ and where the operator norms and the \(\mathcal {R}\)-bounds are taken in $$\begin{aligned} \mathcal {B}(\mathscr {A}^{s}(\mathbb {R}^{n-1},w;E),W^{k}_{p}((\varepsilon ,\infty ),|{\text {pr}}_n|^{r};\mathscr {A}^{t}(\mathbb {R}^{n-1},w;E^{2m}))). 
\end{aligned}$$ If we now choose \(\zeta :=[m_j+s-k-t]_+\), then we obtain $$\begin{aligned} \gamma _1=r-p[t+k-m_j-s]_+,\quad \gamma _2=p[m_j+s-k-t]_+-p[s-t]_+. \end{aligned}$$ From this, it follows that $$\begin{aligned} -\gamma _1-\gamma _2=-r+p(k-m_j)+p([s-t]_++t-s)=-r+p(k-m_j)+p[t-s]_+ \end{aligned}$$ This yields the following result: Recall Assumptions 1.1 and 1.2. Let \(k\in \mathbb {N}_0\), \(r,s,t\in \mathbb {R}\) and \(p\in [1,\infty )\). Suppose that \(r-p[t+k-m_j-s]_+>-1\). For all \(\sigma >0\), there is a constant \(C>0\) such that $$\begin{aligned} \Vert {\text {Poi}}_{j}(\lambda )\Vert \le C|\lambda |^{\frac{-1-r+p(k-m_j)+p[t-s]_+}{2mp}} \end{aligned}$$ for all \(\lambda \in \Sigma _{\phi }\) such that \(|\lambda |\ge \sigma \) where the operator norms are taken in the space \(\mathcal {B}(\mathscr {A}^{s}(\mathbb {R}^{n-1},w;E),W^{k}_{p}(\mathbb {R}_+,|{\text {pr}}_n|^{r};\mathscr {A}^{t}(\mathbb {R}^{n-1},w;E^{2m})))\). Let \(\varepsilon \in (0,\gamma _1+1)\). Then, for all \(\sigma >0\) there is a constant \(C>0\) such that $$\begin{aligned} \mathcal {R}\big (\{|\lambda |^{\frac{1+r-\varepsilon -p(k-m_j)-p[t-s]_+}{2mp}}{\text {Poi}}_{j}(\lambda ):\lambda \in \Sigma _{\phi },\,|\lambda |\ge \sigma \}\big )\le C \end{aligned}$$ where the \(\mathcal {R}\)-bounds are taken in \(\mathcal {B}(\mathscr {A}^{s}(\mathbb {R}^{n-1},w;E),W^{k}_{p}(\mathbb {R}_+, |{\text {pr}}_n|^{r};\mathscr {A}^{t}(\mathbb {R}^{n-1},w;E^{2m})))\). Suppose that the assumptions of Proposition 2.14 hold true. Then, for all \(\sigma >0\) there is a constant \(C>0\) such that $$\begin{aligned} \mathcal {R}\big (\{|\lambda |^{\frac{1+r-p(k-m_j)-p[t-s]_+}{2mp}}{\text {Poi}}_{j}(\lambda ):\lambda \in \Sigma _{\phi },\,|\lambda |\ge \sigma \}\big )\le C \end{aligned}$$ where the \(\mathcal {R}\)-bounds are taken in \(\mathcal {B}(\mathscr {A}^{s}(\mathbb {R}^{n-1},w;E),W^{k}_{p}(\mathbb {R}_+,|{\text {pr}}_n|^{r};\mathscr {A}^{t}(\mathbb {R}^{n-1},w;E^{2m})))\). This follows from Proposition 4.8, Proposition 4.9 and Proposition 4.11 together with the observations in Remark 4.15. \(\square \) Corollary 4.17 Let \(k\in \mathbb {Z}\), \(s,t\in \mathbb {R}\), \(p\in (1,\infty )\) and \(r\in (-1,p-1)\). Suppose that \(r-p[t+k-m_j-s]_+>-1\) and that \(\mathscr {A}^s\) is reflexive. The case \(k\in \mathbb {N}_0\) is already contained in Theorem 4.16. Hence, we only treat the case \(k<0\). In this case, it holds that $$\begin{aligned} (r-pk)-p[t-m_j-s]_+\ge r-p[t+k-m_j-s]_+>-1. \end{aligned}$$ Hence, Theorem 4.16 holds with a weight of the power \(r-pk\) and smoothness 0 in normal direction. Combining this with Lemma 2.22 yields the assertion. \(\square \) Let \(s,t\in \mathbb {R}\), \(k\in (0,\infty ){\setminus }\mathbb {N}\), \(p\in (1,\infty )\), \(r\in (-1,p-1)\) and \(q\in [1,\infty ]\). We write \(k=\overline{k}-\theta \) with \(\overline{k}\in \mathbb {N}_0\) and \(\theta \in [0,1)\). Suppose that \(r-p[t+\overline{k}-m_j-s]_+>-1\). For all \(\sigma >0\) there is a constant \(C>0\) such that for all \(\lambda \in \Sigma _{\phi }\) with \(|\lambda |\ge \sigma \) we have the estimate where the norm is taken in \(\mathcal {B}(\mathscr {A}^{s}(\mathbb {R}^{n-1},w;E),H^{k}_{p}(\mathbb {R}_+,|{\text {pr}}_n|^{r};\mathscr {A}^{t}(\mathbb {R}^{n-1},w;E^{2m})))\) or in \(\mathcal {B}(\mathscr {A}^{s}(\mathbb {R}^{n-1},w;E),B^{k}_{p,q}(\mathbb {R}_+,|{\text {pr}}_n|^{r};\mathscr {A}^{t}(\mathbb {R}^{n-1},w;E^{2m}))).\) Let \(\varepsilon \in (0,\gamma _1+1)\) and let E be a UMD space. 
Then, for all \(\sigma >0\) there is a constant \(C>0\) such that where the \(\mathcal {R}\)-bounds are taken in \(\mathcal {B}(\mathscr {A}^{s}(\mathbb {R}^{n-1},w;E),H^{k}_{p}(\mathbb {R}_+,|{\text {pr}}_n|^{r};\mathscr {A}^{t}(\mathbb {R}^{n-1},w;E^{2m})))\). Suppose that the assumptions of Proposition 2.14 hold true and let E be a UMD space. Then, for all \(\sigma >0\) there is a constant \(C>0\) such that This follows from Theorem 4.16 together with real and complex interpolation, see [37, Proposition 6.1, (6.4)] together with a retraction–coretraction argument, [31, Proposition 5.6] and Proposition 2.3. Note that the power weight \(|{\text {pr}}_n|^{r}\) is an \(A_p\) weight, since \(r\in (-1,p-1)\), see [18, Example 9.1.7]. \(\square \) Let \(p\in (1,\infty )\), \(r\in (-1,p-1)\) and \(w_r(x):=x^r\) for \(x\in \mathbb {R}_+\). Then, the linear operator $$\begin{aligned} T:L_p(\mathbb {R}_+,w_r;\mathbb {R})\rightarrow L_p(\mathbb {R}_+,w_r;\mathbb {R}),\;f\mapsto \int _{0}^{\infty } \frac{f(y)}{x+y}\,dy \end{aligned}$$ is bounded. In [17, Appendix I.3] this was shown for \(r=0\) using Schur's Lemma. We adjust the same proof to the weighted setting. Let \(K(x,y):=\frac{1}{y^r(x+y)}\). Then, we may write $$\begin{aligned} (Tf)(x)=\int _0^\infty K(x,y)f(y) y^r\,dy. \end{aligned}$$ We further define the transpose operator $$\begin{aligned} (T^tf)(y)=\int _0^\infty K(x,y)f(x) x^r\,\mathrm{d}x=\frac{1}{y^r}\int _0^\infty \frac{f(x)}{x+y}x^r\,\mathrm{d}x. \end{aligned}$$ By the lemma in [17, Appendix I.2], it is sufficient to find \(C>0\) and \(u,v:\mathbb {R}_+\rightarrow (0,\infty )\) such that $$\begin{aligned} T(u^{p'})\le Cv^{p'}\quad \text {and}\quad T^t(v^{p})\le Cu^{p}, \end{aligned}$$ where \(1=\frac{1}{p}+\frac{1}{p'}\). Similar to [17, Appendix I.3], we choose $$\begin{aligned} u(x):=v(x):=x^{-\frac{1+r}{pp'}} \end{aligned}$$ $$\begin{aligned} C:=\max \left\{ \int _0^\infty \frac{t^{-\frac{1+r}{p}}}{1+t}\,dt,\int _0^\infty \frac{t^{r-\frac{1+r}{p'}}}{1+t}\,dt \right\} . \end{aligned}$$ Note that \(r\in (-1,p-1)\) ensures that both integrals are finite since $$\begin{aligned} -\frac{1+r}{p}\in (-1,0)\Longleftrightarrow r\in (-1,p-1) \end{aligned}$$ $$\begin{aligned} r-\frac{1+r}{p'}\in (-1,0)\Longleftrightarrow r\in (-1,p-1). \end{aligned}$$ With this choice, we obtain $$\begin{aligned} (T u^{p'})(x)=\int _0^\infty \frac{y^{-\frac{1+r}{p}}}{x+y}\,dy=x^{-\frac{1+r}{p}}\int _0^\infty \frac{t^{-\frac{1+r}{p}}}{1+t}\,dt\le C v(x)^{p'} \end{aligned}$$ $$\begin{aligned} (T^t v^{p})(y)=\frac{1}{y^r}\int _0^\infty \frac{x^{r-\frac{1+r}{p'}}}{x+y}\,\mathrm{d}x=y^{-\frac{1+r}{p'}}\int _0^\infty \frac{t^{r-\frac{1+r}{p'}}}{1+t}\,dt\le Cu(y)^p. \end{aligned}$$ This shows the assertion. \(\square \) From now on, we use the notation $$\begin{aligned} D^{k,\widetilde{k},s}_{r}(I):= H_p^{k}(I,|{\text {pr}}_n|^r,\mathscr {A}^{s+\widetilde{k}})&\cap H^{k+\widetilde{k}}_p(I,|{\text {pr}}_n|^r,\mathscr {A}^{s}),\nonumber \\ D^{k,2m,s}_{r,B}(I):= \{u\in H_p^{k}(I,|{\text {pr}}_n|^r,\mathscr {A}^{s+2m})&\cap H^{k+2m}_p(I,|{\text {pr}}_n|^r,\mathscr {A}^{s}):\nonumber \\&{\text {tr}}_{x_n=0} B_j(D)u=0\text { for all }j=1,\ldots ,m\} \end{aligned}$$ for \(p\in (1,\infty )\) \(k,\widetilde{k}\in [0,\infty )\), \(s\in \mathbb {R}\), \(r\in (-1,p-1)\) and \(I\in \{\mathbb {R}_+,\mathbb {R}\}\). 
Moreover, we endow both spaces with the norm $$\begin{aligned} \Vert u\Vert _{D^{k,\widetilde{k},s}_{r}(I)}&=\max \{\Vert u\Vert _{H_p^{k}(I,|{\text {pr}}_n|^r,\mathscr {A}^{s+\widetilde{k}})}, \Vert u\Vert _{H^{k+\widetilde{k}}_p(I,|{\text {pr}}_n|^r,\mathscr {A}^{s})} \},\\ \Vert u\Vert _{D^{k,2m,s}_{r,B}(I)}&=\max \{\Vert u\Vert _{H_p^{k}(I,|{\text {pr}}_n|^r,\mathscr {A}^{s+2m})}, \Vert u\Vert _{H^{k+2m}_p(I,|{\text {pr}}_n|^r,\mathscr {A}^{s})} \}, \end{aligned}$$ respectively, so that \((D^{k,\widetilde{k},s}_{r}(I),\Vert \cdot \Vert _{D^{k,\widetilde{k},s}_{r}(I)})\) and \((D^{k,2m,s}_{r,B}(I),\Vert \cdot \Vert _{D^{k,2m,s}_{r,B}(I)})\) are Banach spaces. Let \(s\in \mathbb {R}\), \(p\in (1,\infty )\), \(r\in (-1,p-1)\) and \(k\in \mathbb {N}_0\) such that \(k\le \min \{\beta _n:\beta \in \mathbb {N}_0^n,|\beta |=m_j,b_\beta ^j\ne 0\}\). Let further \(u\in D^{k,2m,s}_{r}(\mathbb {R}_+)\) and \(\theta \in [0,1]\) such that \(2m\theta \in \mathbb {N}_0\). Then, for all \(\sigma >0\) there is a constant \(C>0\) such that we have the estimate $$\begin{aligned} \mathcal {R}(\{\lambda ^{\theta }{\text {Poi}}_j(\lambda ){\text {tr}}_{x_n=0}B_j(D):\lambda \in \Sigma _{\phi },\,|\lambda |\ge \sigma \})\le C \end{aligned}$$ where the \(\mathcal {R}\)-bound is taken in $$\begin{aligned} \mathcal {B}(D^{k,2m,s}_{r}(\mathbb {R}_+),D^{k,2m(1-\theta ),s}_{r}(\mathbb {R}_+)). \end{aligned}$$ The proof uses an approach which is sometimes referred to as the Volevich trick. This approach is already standard in the treatment of parameter-elliptic and parabolic boundary value problems in classical Sobolev spaces, see for example Lemma 7.1 in [8] and how it is used to obtain the results therein. The idea is to use the fundamental theorem of calculus in normal direction and to apply the boundedness of the operator from Lemma 4.19. Using these ideas in connection with Corollary 4.7, we can carry out the following computation: Let \((\varepsilon _l)_{l\in \mathbb {N}}\) be a Rademacher sequence on the probability space \((\Omega ,\mathcal {F},\mathbb {P})\), \(N\in \mathbb {N}\), \(\lambda _1,\ldots ,\lambda _N\in \Sigma _{\phi }\) and \(u_1,\ldots ,u_N\in H^{k}_p(\mathbb {R}_+,|{\text {pr}}_n|^r;\mathscr {A}^{s+2m})\cap H^{k+2m}_p(\mathbb {R}_+,|{\text {pr}}_n|^r;\mathscr {A}^{s})\).
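Before estimating, let us make the use of the fundamental theorem of calculus explicit; this is a sketch of the identity behind the first step of the computation. Since the expression \([D_{x_n}^{\widetilde{k}}{\text {ev}}_{x_n+y_n}{\text {Poi}}_j(\lambda _l)][B_j(D)u_l](\,\cdot \,,y_n)\) decays as \(y_n\rightarrow \infty \) (by the decay estimate of Corollary 4.7, at least for sufficiently nice \(u_l\)), we may write $$\begin{aligned}{}[D_{x_n}^{\widetilde{k}}{\text {Poi}}_j(\lambda _l){\text {tr}}_{x_n=0}B_j(D)u_l](\,\cdot \,,x_n)=-\int _{\mathbb {R}_+}\partial _{y_n}\Big ([D_{x_n}^{\widetilde{k}}{\text {ev}}_{x_n+y_n}{\text {Poi}}_j(\lambda _l)][B_j(D)u_l](\,\cdot \,,y_n)\Big )\,\mathrm {d}y_n. \end{aligned}$$ Expanding \(\partial _{y_n}\) by the product rule produces exactly the two types of integrands that appear in the estimate below.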
Then, we obtain $$\begin{aligned}&\left\| \sum _{l=1}^N \varepsilon _l\lambda _l^{\theta }{\text {Poi}}_j{\text {tr}}_{x_n=0}B_j(D) u_l \right\| _{L_p(\Omega ;D^{k,2m(1-\theta ),s}_{r}(\mathbb {R}_+))}\\&\quad \lesssim \sum _{\widetilde{k}=0}^k\left\| \sum _{l=1}^N \varepsilon _l\lambda _l^{\theta }D_{x_n}^{\widetilde{k}}{\text {Poi}}_j{\text {tr}}_{x_n=0}B_j(D) u_l \right\| _{L_p(\Omega ;L_p(\mathbb {R}_+,|{\text {pr}}_n|^r;\mathscr {A}^{s+2m(1-\theta )})}\\&\qquad + \sum _{\widetilde{k}=0}^{k+(1-\theta )2m}\left\| \sum _{l=1}^N \varepsilon _l\lambda _l^{\theta }D_{x_n}^{\widetilde{k}}{\text {Poi}}_j{\text {tr}}_{x_n=0}B_j(D) u_l \right\| _{L_p(\Omega ;L_p(\mathbb {R}_+,|{\text {pr}}_n|^r;\mathscr {A}^{s})}\\&\quad \le \sum _{\widetilde{k}=0}^k\left( \int _{\Omega }\int _{\mathbb {R}_+}\bigg \Vert \int _{\mathbb {R}_+}\sum _{l=1}^N \varepsilon _l\lambda _l^{\theta }[\partial _{y_n}D_{x_n}^{\widetilde{k}}{\text {ev}}_{x_n+y_n}{\text {Poi}}_j][B_j(D) u_l](\,\cdot \,,y_n)\,dy_n\bigg \Vert _{\mathscr {A}^{s+2m(1-\theta )}}^px_n^r\,\mathrm{d}x_n\,d\mathbb {P}\right) ^{1/p}\\&\qquad +\sum _{\widetilde{k}=0}^k\left( \int _{\Omega }\int _{\mathbb {R}_+}\bigg \Vert \int _{\mathbb {R}_+}\sum _{l=1}^N \varepsilon _l\lambda _l^{\theta }[D_{x_n}^{\widetilde{k}}{\text {ev}}_{x_n+y_n}{\text {Poi}}_j][\partial _{y_n}B_j(D) u_l](\,\cdot \,,y_n)\,dy_n\bigg \Vert _{\mathscr {A}^{s+2m(1-\theta )}}^px_n^r\,\mathrm{d}x_n\,d\mathbb {P}\right) ^{1/p}\\&\qquad +\sum _{\widetilde{k}=0}^{k+(1-\theta )2m}\left( \int _{\Omega }\int _{\mathbb {R}_+}\bigg \Vert \int _{\mathbb {R}_+}\sum _{l=1}^N \varepsilon _l\lambda _l^{\theta }[\partial _{y_n}D_{x_n}^{\widetilde{k}}{\text {ev}}_{x_n+y_n}{\text {Poi}}_j][B_j(D) u_l](\,\cdot \,,y_n)\,dy_n\bigg \Vert _{\mathscr {A}^{s}}^px_n^r\,\mathrm{d}x_n\,d\mathbb {P}\right) ^{1/p}\\&\qquad + \sum _{\widetilde{k}=0}^{k+(1-\theta )2m}\left( \int _{\Omega }\int _{\mathbb {R}_+}\bigg \Vert \int _{\mathbb {R}_+}\sum _{l=1}^N \varepsilon _l\lambda _l^{\theta }[D_{x_n}^{\widetilde{k}}{\text {ev}}_{x_n+y_n}{\text {Poi}}_j][\partial _{y_n}B_j(D) u_l](\,\cdot \,,y_n)\,dy_n\bigg \Vert _{\mathscr {A}^{s}}^px_n^r\,\mathrm{d}x_n\,d\mathbb {P}\right) ^{1/p}. \end{aligned}$$ In order to keep the notation shorter, we continue the computation with just the first of the four terms. The steps we would have to carry out for the other three terms, are almost exactly the same with just minor changes on the parameters. 
We obtain $$\begin{aligned}&\sum _{\widetilde{k}=0}^k\left( \int _{\Omega }\int _{\mathbb {R}_+}\bigg \Vert \int _{\mathbb {R}_+}\sum _{l=1}^N \varepsilon _l\lambda _l^{\theta }[\partial _{y_n}D_{x_n}^{\widetilde{k}}{\text {ev}}_{x_n+y_n}{\text {Poi}}_j][B_j(D) u_l](\,\cdot \,,y_n)\,dy_n\bigg \Vert _{\mathscr {A}^{s+2m(1-\theta )}}^px_n^r\,\mathrm{d}x_n\,d\mathbb {P}\right) ^{1/p}\\&\quad \le \sum _{\widetilde{k}=0}^k\left( \int _{\Omega }\int _{\mathbb {R}_+}\bigg ( \int _{\mathbb {R}_+}\bigg \Vert \sum _{l=1}^N \varepsilon _l\lambda _l^{\theta }[\partial _{y_n}D_{x_n}^{\widetilde{k}}{\text {ev}}_{x_n+y_n}{\text {Poi}}_j][B_j(D) u_l](\,\cdot \,,y_n)\bigg \Vert _{\mathscr {A}^{s+2m(1-\theta )}}\,dy_n\bigg )^px_n^r\,\mathrm{d}x_n\,d\mathbb {P}\right) ^{1/p}\\&\quad \lesssim \left( \int _{\Omega }\int _{\mathbb {R}_+}\bigg ( \int _{\mathbb {R}_+}\bigg \Vert \sum _{l=1}^N \varepsilon _l\frac{1}{x_n+y_n}[B_j(D) u_l](\,\cdot \,,y_n)\bigg \Vert _{\mathscr {A}^{s+k+2m-m_j}}\,dy_n\bigg )^px_n^r\,\mathrm{d}x_n\,d\mathbb {P}\right) ^{1/p}\\&\quad \lesssim \left( \int _{\Omega }\int _{\mathbb {R}_+}\bigg \Vert \sum _{l=1}^N \varepsilon _l[B_j(D) u_l](\,\cdot \,,x_n)\bigg \Vert _{\mathscr {A}^{s+k+2m-m_j}}^px_n^r\,\mathrm{d}x_n\,d\mathbb {P}\right) ^{1/p}\\&\quad \le \sum _{|\beta |=m_j}\bigg \Vert b_{\beta }^j\partial _n^{\beta _n}\partial _{x'}^{\beta '}\sum _{l=1}^N \varepsilon _l u_l\bigg \Vert _{L_p(\Omega ;L_p(\mathbb {R}_+,|{\text {pr}}_n|^r;\mathscr {A}^{s+k+2m-m_j}))}\\&\quad \lesssim \sum _{\beta _n=k}^{m_j}\bigg \Vert \sum _{l=1}^N \varepsilon _l u_l\bigg \Vert _{L_p(\Omega ;H^{\beta _n}_p(\mathbb {R}_+,|{\text {pr}}_n|^r;\mathscr {A}^{s+k+2m-\beta _n}))}. \end{aligned}$$ From the second to the third line, we used Corollary 4.7, from the third to the fourth line we used Lemma 4.19 and in the last step we used that \(k\le \min \{\beta _n:\beta \in \mathbb {N}_0^n,|\beta |=m_j,b_\beta ^j\ne 0\}\). The other three terms above can either also be estimated by $$\begin{aligned} \sum _{\beta _n=k}^{m_j}\bigg \Vert \sum _{l=1}^N \varepsilon _l u_l\bigg \Vert _{L_p(\Omega ;H^{\beta _n}_p(\mathbb {R}_+,|{\text {pr}}_n|^r;\mathscr {A}^{s+k+2m-\beta _n}))} \end{aligned}$$ $$\begin{aligned} \sum _{\beta _n=k}^{m_j}\bigg \Vert \sum _{l=1}^N \varepsilon _l u_l\bigg \Vert _{L_p(\Omega ;H^{\beta _n+1}_p(\mathbb {R}_+,|{\text {pr}}_n|^r;\mathscr {A}^{s+k+2m-\beta _n-1}))} \end{aligned}$$ if the derivative \(\partial _{y_n}\) is taken of \(g_j\) instead of \({\text {Poi}}_j\). Since \(m_j< 2m\), we obtain the estimate $$\begin{aligned} \left\| \sum _{l=1}^N \varepsilon _l\lambda _l^{\theta }{\text {Poi}}_j{\text {tr}}_{x_n=0}B_j(D) u_l \right\| _{L_p(\Omega ;D^{k,2m(1-\theta ),s}_{r}(\mathbb {R}_+))}\lesssim \left\| \sum _{l=1}^N \varepsilon _l u_l \right\| _{L_p(\Omega ;D^{k,2m,s}_{r}(\mathbb {R}_+))}. \end{aligned}$$ Resolvent estimates Now we study the resolvent problem, i.e., (1-1) with \(g_j=0\). We show that the corresponding operator is \(\mathcal {R}\)-sectorial and thus has the property of maximal regularity in the UMD case. But first, we prove the \(\mathcal {R}\)-sectoriality in \(\mathbb {R}^n\). Let \(k,s\in \mathbb {R}\). Suppose that E satisfies Pisier's property \((\alpha )\) if one of the scales \(\mathscr {A}, \mathscr {B}\) belongs to the Bessel potential scale. 
Then, for all \(\sigma >0\) the realization of \(A(D)-\sigma \) in \(\mathscr {B}^k(\mathscr {A}^s)\) given by $$\begin{aligned} A(D)-\sigma :\mathscr {B}^k(\mathscr {A}^s) \supset \mathscr {B}^{k+2m}(\mathscr {A}^s)\cap \mathscr {B}^k(\mathscr {A}^{s+2m})\rightarrow \mathscr {B}^k(\mathscr {A}^s),\, u\mapsto A(D)u-\sigma u \end{aligned}$$ is \(\mathcal {R}\)-sectorial in \(\Sigma _{\phi }\) and there is a constant \(C>0\) such that the estimate $$\begin{aligned} \Vert u\Vert _{\mathscr {B}^{k+2m}(\mathscr {A}^s)\cap \mathscr {B}^k(\mathscr {A}^{s+2m})}\le C\Vert (\lambda +\sigma -A(D))u\Vert _{\mathscr {B}^k(\mathscr {A}^s)} \end{aligned}$$ holds for all \(\lambda \in \Sigma _{\phi }\). It was shown in [22, Lemma 5.10] that $$\begin{aligned}&\mathcal {R}(\{\langle \xi \rangle ^{|\alpha |}D^{\alpha }_{\xi } (s_1+s_2\lambda +s_3|\xi |^{2m})(\lambda +1-A(\xi ))^{-1}\\&:\lambda \in \Sigma _{\phi },\,\xi \in \mathbb {R}^n\})<\infty \quad \text {in }\mathcal {B}(E) \end{aligned}$$ holds for all multi-indices \(\alpha \in \mathbb {N}_0^n\) and all \((s_1,s_2,s_3)\in \mathbb {R}^3\). Note that the authors of [22] use a different convention concerning the sign of A. Taking \((s_1,s_2,s_3)=(0,1,0)\) together with the iterated \(\mathcal {R}\)-bounded versions of Mikhlin's theorem, Proposition 3.8, shows that $$\begin{aligned} \mathcal {R}(\{\lambda (\lambda +1-A(D))^{-1}:\lambda \in \Sigma _{\phi }\})<\infty . \end{aligned}$$ Thus, it only remains to prove that \(\mathscr {B}^{k+2m}(\mathscr {A}^s)\cap \mathscr {B}^k(\mathscr {A}^{s+2m})\) is the right domain and that (5-1) holds. But (5-2) with \((s_1,s_2,s_3)=(1,0,1)\) shows that $$\begin{aligned}{}[\xi \mapsto (1+|\xi |^{2m})(\lambda +1-A(\xi ))^{-1}]\in S^0_{\mathcal {R}}(\mathbb {R}^n;\mathcal {B}(E)) \end{aligned}$$ and hence $$\begin{aligned}{}[\xi \mapsto (\lambda +1-A(\xi ))^{-1}]\in S^{-2m}_{\mathcal {R}}(\mathbb {R}^n;\mathcal {B}(E)) \end{aligned}$$ uniformly in \(\lambda \in \Sigma _{\phi }\). Now the assertion follows from Proposition 3.11. \(\square \) If both \(\mathscr {A}\) and \(\mathscr {B}\) belong to the Bessel potential scale, then Theorem 5.1 can be improved in the following way: Lemma 3.9 together with Fubini's theorem yields that $$\begin{aligned} \langle D' \rangle ^{-s}\langle D_n\rangle ^{-k} L_p(\mathbb {R}^n_x,w_0\otimes w_1;E){\mathop {\rightarrow }\limits ^{\eqsim }} H^k_p(\mathbb {R}_{x_n},w_1;H^{s}_p(\mathbb {R}^{n-1}_{x'},w_0,E)). \end{aligned}$$ Moreover, we have $$\begin{aligned}&\langle D' \rangle ^{-s}\langle D_n\rangle ^{-k} H^{2m}_p(\mathbb {R}^n_x,w_0\otimes w_1;E)\\&\quad =H^{k+2m}_p(\mathbb {R}_{x_n},w_1;H^{s}_p(\mathbb {R}^{n-1}_{x'},w_0,E))\cap H^{k}_p(\mathbb {R}_{x_n},w_1;H^{s+2m}_p(\mathbb {R}^{n-1}_{x'},w_0,E)). \end{aligned}$$ But it is well known that the realization of A(D) even admits a bounded \(\mathcal {H}^{\infty }\)-calculus in \( L_p(\mathbb {R}^n_x,w_0\otimes w_1;E)\) with domain \(H^{2m}_p(\mathbb {R}^n_x,w_0\otimes w_1;E)\) no matter whether Pisier's property \((\alpha )\) is satisfied or not (recall that the weights in the Bessel potential case are in \(A_p\)). This can be derived by using the weighted versions of Mikhlin's theorem in the proof of [8, Theorem 5.5].
Since \(\langle D' \rangle ^{-s}\langle D_n\rangle ^{-k}\) is an isomorphism, A(D) also admits a bounded \(\mathcal {H}^{\infty }\)-calculus in \(H^k_p(\mathbb {R}_{x_n},w_1;H^{s}_p(\mathbb {R}^{n-1}_{x'},w_0,E))\) with domain $$\begin{aligned} H^{k+2m}_p(\mathbb {R}_{x_n},w_1;H^{s}_p(\mathbb {R}^{n-1}_{x'},w_0,E))\cap H^{k}_p(\mathbb {R}_{x_n},w_1;H^{s+2m}_p(\mathbb {R}^{n-1}_{x'},w_0,E)), \end{aligned}$$ even if Pisier's property \((\alpha )\) is not satisfied. In the proof of Theorem 5.1, one can also use Proposition 3.7 instead of Proposition 3.8 if one only needs sectoriality. In this case, we can again drop the assumption that E has to satisfy Pisier's property \((\alpha )\). For the \(\mathcal {R}\)-sectoriality of the boundary value problem, which we are going to derive in Theorem 5.4, we have a restriction on the regularity in normal direction. It may not be larger than \(k_{max}\in \mathbb {N}_0\) which we define by $$\begin{aligned} k_{\max }:=\min \{\beta _n|\,\exists j\in \{1,\ldots ,m\}\exists \beta \in \mathbb {N}_0^n,|\beta |=m_j: b^j_{\beta }\ne 0\}, \end{aligned}$$ i.e., \(k_{\max }\) is the minimal order in normal direction of all nonzero differential operators which appear in any of the boundary operators $$\begin{aligned} B_j(D)=\sum _{|\beta |=m_j} b^j_{\beta }D^{\beta }\quad (j=1,\ldots ,m). \end{aligned}$$ Therefore, if there is a nonzero term with no normal derivatives in one of the \(B_1,\ldots , B_n\), then \(k_{\max }=0\). In particular, it holds that \(k_{\max }=0\) if one of the operators \(B_1,\ldots , B_n\) corresponds to the Dirichlet trace at the boundary. This includes the case of the Dirichlet Laplacian. On the other hand, for the Neumann Laplacian we have \(k_{\max }=1\). In this sense, our results will be analogous to the usual isotropic case: We will be able to derive \(\mathcal {R}\)-sectoriality of the Neumann Laplacian in \(L_{p}(\mathbb {R}_+,;\mathscr {A}^s)\) and \(H_{p}^1(\mathbb {R}_+,;\mathscr {A}^s)\), but for the Dirichlet Laplacian we can only derive it in \(L_{p}(\mathbb {R}_+,;\mathscr {A}^s)\). Recall Assumptions 1.1 and 1.2. Suppose that E satisfies Pisier's property \((\alpha )\). Let \(k\in [0,k_{\max }]\cap \mathbb {N}_0\), \(p\in (1,\infty )\), \(r\in (-1,p-1)\) and \(s\in \mathbb {R}\). We define the operator $$\begin{aligned} A_B:H_{p}^k(\mathbb {R}_+,|{\text {pr}}_n|^r;\mathscr {A}^s)\supset D(A_B) \rightarrow H_{p}^k(\mathbb {R}_+,|{\text {pr}}_n|^r;\mathscr {A}^s),\,u\mapsto A(D)u \end{aligned}$$ on the domain $$\begin{aligned} D(A_B):=\{ u\in H_{p}^{k}(\mathbb {R}_+,|{\text {pr}}_n|^r;\mathscr {A}^s):&A_B u\in H_{p}^{k}(\mathbb {R}_+,|{\text {pr}}_n|^r;\mathscr {A}^s)\\&{\text {tr}}_{x_n=0} B_j(D)u=0\text { for all }j=1,\ldots ,m\} \end{aligned}$$ Then, for all \(\sigma >0\) we have that \(A_B-\sigma \) is \(\mathcal {R}\)-sectorial in \(\Sigma _{\phi }\). Moreover, there is a constant C such that for all \(\lambda \in \Sigma _\phi \) with \(|\lambda |\ge \sigma \) we have the estimate $$\begin{aligned} \Vert u\Vert _{D^{k,2m,s}_{r,B}(\mathbb {R}_+)} \le C \Vert (\lambda -A(D))u\Vert _{H_{p}^k(\mathbb {R}_+,|{\text {pr}}_n|^r;\mathscr {A}^s)}. \end{aligned}$$ In particular, it holds that \(D^{k,2m,s}_{r,B}(\mathbb {R}_+)=D(A_B)\). 
$$\begin{aligned} R(\lambda )f=r_+(\lambda -A(D))_{\mathbb {R}^n}^{-1} \mathscr {E} f - \sum _{j=1}^m{\text {pr}}_{1}{\text {Poi}}_j(\lambda ){\text {tr}}_{x_n=0}B_j(D)(\lambda -A(D))_{\mathbb {R}^n}^{-1} \mathscr {E} f, \end{aligned}$$ where \(\lambda \in \Sigma _{\phi }\), \(f\in H_{p}^k(\mathbb {R}_+,|{\text {pr}}_n|^r;\mathscr {A}^s)\), \((\lambda -A(D))_{\mathbb {R}^n}^{-1}\) denotes the resolvent on \(\mathbb {R}^n\) as in Theorem 5.1, \(r_+\) denotes the restriction of a distribution on \(\mathbb {R}^n\) to \(\mathbb {R}^n_+\) and \(\mathscr {E}\) denotes an extension operator mapping \(H_{p}^t(\mathbb {R}_+,|{\text {pr}}_n|^r;\mathscr {A}^s)\) into \(H_{p}^t(\mathbb {R},|{\text {pr}}_n|^r;\mathscr {A}^s)\) for arbitrary \(t\in \mathbb {R}\). \(\mathscr {E}\) can for example be chosen to be Seeley's extension, see [43]. Combining Proposition 4.20 with \(\theta =1\) and Theorem 5.1 yields that the set $$\begin{aligned} \{\lambda R(\lambda ):\lambda \in \Sigma _{\phi },|\lambda |\ge \sigma \}\subset \mathcal {B}(H_{p}^k(\mathbb {R}_+,|{\text {pr}}_n|^r;\mathscr {A}^s)) \end{aligned}$$ is \(\mathcal {R}\)-bounded. Next, we show that \(R(\lambda )\) is indeed the resolvent so that we obtain \(\mathcal {R}\)-sectoriality. To this end, we show that $$\begin{aligned} R(\lambda ):H_{p}^k(\mathbb {R}_+,|{\text {pr}}_n|^r;\mathscr {A}^s) \rightarrow D(A_B) \end{aligned}$$ is a bijection with inverse \(\lambda -A_B\). Let \(f\in H_{p}^k(\mathbb {R}_+,|{\text {pr}}_n|^r;\mathscr {A}^s)\). Since $$\begin{aligned} {\text {tr}}_{x_n=0}B_k(D){\text {pr}}_1{\text {Poi}}_j(\lambda )=\delta _{k,j}{\text {id}}_{\mathscr {A}} \end{aligned}$$ by construction, it follows from applying \(B_j(D)\) to (5-4) that \({\text {tr}}_{x_n=0}B_j(D)R(\lambda )f=0\) for all \(j=1,\ldots ,m\). Moreover, we have \((\lambda -A(D)){\text {pr}}_1{\text {Poi}}_j(\lambda )=0\) by the definition of \({\text {Poi}}_j(\lambda )\). This shows that $$\begin{aligned} (\lambda -A(D))R(\lambda )={\text {id}}_{H_{p}^k(\mathbb {R}_+,|{\text {pr}}_n|^r;\mathscr {A}^s)} \end{aligned}$$ and therefore $$\begin{aligned} A(D)R(\lambda )f = \lambda R(\lambda )f - (\lambda -A(D))R(\lambda )f=\lambda R(\lambda )f-f. \end{aligned}$$ But it is already contained in (5-5) that \( \lambda R(\lambda )f\in H_{p}^k(\mathbb {R}_+,|{\text {pr}}_n|^r;\mathscr {A}^s)\). This shows that \(R(\lambda )\) maps \(H_{p}^k(\mathbb {R}_+,|{\text {pr}}_n|^r;\mathscr {A}^s)\) into \(D(A_B)\). In addition, (5-6) shows the injectivity of \(R(\lambda )\). But also $$\begin{aligned} (\lambda -A(D)):D(A_B)\rightarrow H_{p}^k(\mathbb {R}_+,|{\text {pr}}_n|^r;\mathscr {A}^s) \end{aligned}$$ is injective as a consequence of the Lopatinskii–Shapiro condition. Hence, there is a mapping $$\begin{aligned} T(\lambda ):H_{p}^k(\mathbb {R}_+,|{\text {pr}}_n|^r;\mathscr {A}^s)\rightarrow D(A_B) \end{aligned}$$ such that \(T(\lambda )(\lambda -A(D))={\text {id}}_{D(A_B)}\). But from this, we obtain $$\begin{aligned} T(\lambda )=T(\lambda )(\lambda -A_B)R(\lambda )=R(\lambda ) \end{aligned}$$ $$\begin{aligned} R(\lambda )(\lambda -A_B)={\text {id}}_{D(A_B)},\quad (\lambda -A_B)R(\lambda )={\text {id}}_{H_{p}^k(\mathbb {R}_+,|{\text {pr}}_n|^r;\mathscr {A}^s)}, \end{aligned}$$ i.e., \(R(\lambda )=(\lambda -A_B)^{-1}\) is indeed the resolvent and we obtain the \(\mathcal {R}\)-sectoriality. It remains to show that the estimate (5-3) holds. To this end, we can again use the formula for the resolvent (5-4) in connection with Proposition 4.20 (\(\theta =0\)) and Theorem 5.1. 
Then, we obtain for \(u\in D(A_B)\) that $$\begin{aligned} \Vert u\Vert _{D^{k,2m,s}_{r,B}(\mathbb {R}_+)}&\le \Vert r_+ (\lambda -A(D))_{\mathbb {R}^n}^{-1}\mathscr {E}(\lambda -A_B)u\Vert _{D^{k,2m,s}_{r,B}(\mathbb {R}_+)} \\&\quad +\sum _{j=1}^m\Vert {\text {pr}}_{1}{\text {Poi}}_j(\lambda ){\text {tr}}_{x_n=0}B_j(D)\\&\quad (\lambda -A(D))_{\mathbb {R}^n}^{-1} \mathscr {E} (\lambda -A_B)u\Vert _{D^{k,2m,s}_{r,B}(\mathbb {R}_+)}\\&\lesssim \Vert (\lambda -A_B)u\Vert _{H^k(\mathbb {R}_+,|{\text {pr}}_n|^r;\mathscr {A}^s)}\\&\quad +\sum _{j=1}^m\Vert r_+ (\lambda -A(D))_{\mathbb {R}^n}^{-1} \mathscr {E} (\lambda -A_B)u\Vert _{D^{k,2m,s}_{r,B}(\mathbb {R}_+)}\\&\lesssim \Vert (\lambda -A_B)u\Vert _{H^k(\mathbb {R}_+,|{\text {pr}}_n|^r;\mathscr {A}^s)}. \end{aligned}$$ This also implies that \(D(A_B)=D^{k,2m,s}_{r,B}(\mathbb {R}_+)\). Indeed, it follows from Proposition 3.11 that $$\begin{aligned}&\Vert (\lambda -A_B)u\Vert _{H^k(\mathbb {R}_+,|{\text {pr}}_n|^r;\mathscr {A}^s)}\lesssim \Vert (\lambda -A(D))\mathscr {E}u\Vert _{H^k(\mathbb {R},|{\text {pr}}_n|^r;\mathscr {A}^s)}\\&\lesssim \Vert \mathscr {E}u\Vert _{D^{k,2m,s}_{r,B}(\mathbb {R})}\lesssim \Vert u\Vert _{D^{k,2m,s}_{r,B}(\mathbb {R}_+)} \end{aligned}$$ for \(u\in D^{k,2m,s}_{r,B}(\mathbb {R}_+)\). Hence, we have $$\begin{aligned} D^{k,2m,s}_{r,B}(\mathbb {R}_+)\hookrightarrow D(A_B) \hookrightarrow D^{k,2m,s}_{r,B}(\mathbb {R}_+). \end{aligned}$$ If E is a UMD space, then the results of Theorem 5.4 also hold for \(k\in [0,k_{\max }]\), i.e., k does not have to be an integer. This follows from complex interpolation, see Proposition 2.3 and [31, Proposition 5.6]. Note that unlike in Proposition 2.3 we cannot replace the UMD space E by a K-convex Banach space, since the UMD property is needed for the complex interpolation of Bessel potential spaces in [31, Proposition 5.6]. Moreover, in Assumption 1.2 we require E to be a UMD space if one of the spaces in tangential or normal direction belongs to the Bessel potential scale. Two canonical applications of Theorem 5.4 are the Dirichlet and the Neumann Laplacian. Let \(E=\mathbb {C}\), \(p\in (1,\infty )\), \(r\in (-1,p-1)\) and \(s\in \mathbb {R}\). We consider the Laplacian with Dirichlet boundary conditions $$\begin{aligned} \Delta _D:L_p(\mathbb {R}_+,|{\text {pr}}_n|^{r};\mathscr {A}^s)\supset D(\Delta _D)\rightarrow L_p(\mathbb {R}_+,|{\text {pr}}_n|^{r};\mathscr {A}^s) \end{aligned}$$ on the domain \(D(\Delta _D)\) given by $$\begin{aligned} D(\Delta _D):=\{u\in H^{2}_p(\mathbb {R}_+,|{\text {pr}}_n|^{r};\mathscr {A}^s)\cap L_p(\mathbb {R}_+,|{\text {pr}}_n|^{r};\mathscr {A}^{s+2m}): {\text {tr}}_{x_n=0}u=0\}. \end{aligned}$$ For all \(\sigma >0\), it holds that \(\Delta _D-\sigma \) is \(\mathcal {R}\)-sectorial in any sector \(\Sigma _{\psi }\) with \(\psi \in (0,\pi )\). Let \(k\in \{0,1\}\). We consider the Laplacian with Neumann boundary conditions $$\begin{aligned} \Delta _N:H^k_p(\mathbb {R}_+,|{\text {pr}}_n|^{r};\mathscr {A}^s)\supset D(\Delta _N)\rightarrow H^k_p(\mathbb {R}_+,|{\text {pr}}_n|^{r};\mathscr {A}^s) \end{aligned}$$ on the domain \(D(\Delta _N)\) given by $$\begin{aligned} D(\Delta _N):=\{u\in H^{k+2}_p(\mathbb {R}_+,|{\text {pr}}_n|^{r};\mathscr {A}^s)\cap H^k_p(\mathbb {R}_+,|{\text {pr}}_n|^{r};\mathscr {A}^{s+2m}): {\text {tr}}_{x_n=0}\partial _n u=0\}. \end{aligned}$$ For all \(\sigma >0\), it holds that \(\Delta _N-\sigma \) is \(\mathcal {R}\)-sectorial in any sector \(\Sigma _{\psi }\) with \(\psi \in (0,\pi )\).
Both statements follow directly from Theorem 5.4. \(\square \)
Application to boundary value problems
Let \(s_1,\ldots ,s_m\in \mathbb {R}\) and \(g_j\in \mathscr {A}^{s_j}\) \((j=1,\ldots ,m)\). Then, the equation $$\begin{aligned} \lambda u-A(D)u&=0\quad \;\text {in }\mathbb {R}^n_+,\\ B_j(D)u&=g_j\quad \text {on }\mathbb {R}^{n-1} \end{aligned}$$ has a unique solution \(u\in \mathscr {S}'(\mathbb {R}^n_+;E)\) for all \(\lambda \in \Sigma _{\phi }\). This solution satisfies $$\begin{aligned} u\in \sum _{j=1}^m\bigcap _{r,t\in \mathbb {R},\,k\in \mathbb {N}_0,\,p\in [1,\infty )\atop r-p[t+k-m_j-s_j]_+>-1} W_p^k(\mathbb {R}_+,|{\text {pr}}_n|^r;\mathscr {A}^t). \end{aligned}$$ Moreover, for all \(\sigma >0\), \(t,r\in \mathbb {R}\), \(p\in [1,\infty )\) and \(k\in \mathbb {N}_0\) such that \(r-p[t+k-m_j-s_j]_+>-1\) for all \(j=1,\ldots ,m\), there is a constant \(C>0\) such that $$\begin{aligned} \Vert u\Vert _{W_p^k(\mathbb {R}_+,|{\text {pr}}_n|^r;\mathscr {A}^t)}\le C\sum _{j=1}^m |\lambda |^{\frac{-1-r+p(k-m_j)+p[t-s_j]_+}{2mp}}\Vert g_j\Vert _{\mathscr {A}^{s_j}} \end{aligned}$$ for all \(\lambda \in \Sigma _{\phi }\) with \(|\lambda |\ge \sigma \). All the assertions follow directly from Theorem 4.16 (a). \(\square \) Note that the smoothness parameters k and t of the solution in Theorem 6.1 can be chosen arbitrarily large if one accepts a strong singularity at the boundary. On the other hand, if t is chosen small enough, then the singularity can be removed. In Theorem 6.1, we can take \(k=1+\max _{j=1,\ldots ,m}m_j\), \(r=0\) and t such that \(r-p[t+k-m_j-s_j]_+>-1\) for all \(j=1,\ldots ,m\). This means that the boundary conditions \(B_j(D)u=g_j\) can be understood in a classical sense. Indeed, [37, Proposition 7.4] in connection with [37, Proposition 3.12] shows that $$\begin{aligned} W^k_p(\mathbb {R}_+;\mathscr {A}^t)\hookrightarrow BUC^{k-1}(\mathbb {R}_+;\mathscr {A}^t). \end{aligned}$$ Hence, \({\text {tr}}_{x_n=0}B_j(D)u\) can be defined in the classical sense. One can again use interpolation techniques or one can directly work with Corollary 4.18 in order to obtain results for the Bessel potential or the Besov scale in normal direction. Note however that this comes with some restrictions on the weight \(|{\text {pr}}_n|^r\). As defined in Remark 5.3, we set $$\begin{aligned} k_{\max }:=\min \{\beta _n|\,\exists j\in \{1,\ldots ,m\}\exists \beta \in \mathbb {N}_0^n,|\beta |= m_j: b^j_{\beta }\ne 0\}. \end{aligned}$$ Let \(s\in \mathbb {R}\), \(p\in (1,\infty )\), \(r\in (-1,p-1)\), \(k\in [0,k_{\max }]\cap \mathbb {N}_0\) and \(f\in W^k_p(\mathbb {R}_+,|{\text {pr}}_n|^r;\mathscr {A}^s)\). Let further \(s_j\in (s+2m+k-m_j-\frac{1+r}{p},\infty )\) and \(g_j\in \mathscr {A}^{s_j}\) \((j=1,\ldots ,m)\).
Then, the equation $$\begin{aligned} \lambda u-A(D)u&=f\quad \;\text {in }\mathbb {R}^n_+,\\ B_j(D)u&=g_j\quad \text {on }\mathbb {R}^{n-1} \end{aligned}$$ has a unique solution $$\begin{aligned} u\in W^{k+2m}_p(\mathbb {R}_+,|{\text {pr}}_n|^r;\mathscr {A}^s) \cap W^{k}_p(\mathbb {R}_+,|{\text {pr}}_n|^r;\mathscr {A}^{s+2m}) \end{aligned}$$ and for all \(\sigma >0\) there is a constant \(C>0\) such that for all \(\lambda \in \Sigma _{\phi }\) with \(|\lambda |\ge \sigma \) we have the estimate $$\begin{aligned}&\Vert u\Vert _{W^{k+2m}_p(\mathbb {R}_+,|{\text {pr}}_n|^r;\mathscr {A}^s)}+\Vert u\Vert _{W^{k}_p(\mathbb {R}_+,|{\text {pr}}_n|^r;\mathscr {A}^{s+2m})}+|\lambda |\,\Vert u\Vert _{W^{k}_p(\mathbb {R}_+,|{\text {pr}}_n|^r;\mathscr {A}^s)}\\&\quad \le C\left( \Vert f\Vert _{W^k_p(\mathbb {R}_+,|{\text {pr}}_n|^r;\mathscr {A}^s)}+\sum _{j=1}^m|\lambda |^{\frac{-1-r+p(k+2m-m_j)}{2mp}}\Vert g\Vert _{\mathscr {A}^{s_j}}\right) . \end{aligned}$$ By Theorem 5.4, we have a unique solution $$\begin{aligned} u_1\in W^{k+2m}_p(\mathbb {R}_+,|{\text {pr}}_n|^r;\mathscr {A}^s) \cap W^{k}_p(\mathbb {R}_+,|{\text {pr}}_n|^r;\mathscr {A}^{s+2m}) \end{aligned}$$ to the equation $$\begin{aligned} \lambda u_1-A(D)u_1&=f\quad \text {in }\mathbb {R}^n_+,\\ B_j(D)u_1&=0\quad \text {on }\mathbb {R}^{n-1} \end{aligned}$$ which satisfies the estimate $$\begin{aligned}&\Vert u_1\Vert _{W^{k+2m}_p(\mathbb {R}_+,|{\text {pr}}_n|^r;\mathscr {A}^s)}+\Vert u_1\Vert _{W^{k}_p(\mathbb {R}_+,|{\text {pr}}_n|^r;\mathscr {A}^{s+2m})}\\&+|\lambda |\,\Vert u_1\Vert _{W^{k}_p(\mathbb {R}_+,|{\text {pr}}_n|^r;\mathscr {A}^s)}\le C\Vert f\Vert _{W^k_p(\mathbb {R}_+,|{\text {pr}}_n|^r;\mathscr {A}^s)}. \end{aligned}$$ By Remark 5.2(b), we do not need Pisier's property \((\alpha )\) for this. Moreover, by Theorem 6.1 the unique solution \(u_2\) to the equation $$\begin{aligned} \lambda u_2-A(D)u_2&=0\quad \;\text {in }\mathbb {R}^n_+,\\ B_j(D)u_2&=g_j\quad \text {on }\mathbb {R}^{n-1} \end{aligned}$$ satisfies the estimates $$\begin{aligned} \Vert u_2\Vert _{W^{k+2m}_p(\mathbb {R}_+,|{\text {pr}}_n|^r;\mathscr {A}^s)}&\le C \sum _{j=1}^m|\lambda |^{\frac{-1-r+p(k+2m-m_j)}{2mp}}\Vert g\Vert _{\mathscr {A}^{s_j}},\\ \Vert u_2\Vert _{W^{k}_p(\mathbb {R}_+,|{\text {pr}}_n|^r;\mathscr {A}^{s+2m})}&\le C \sum _{j=1}^m|\lambda |^{\frac{-1-r+p(k-m_j)}{2mp}}\Vert g\Vert _{\mathscr {A}^{s_j}},\\ \Vert u_2\Vert _{W^{k}_p(\mathbb {R}_+,|{\text {pr}}_n|^r;\mathscr {A}^{s})}&\le C \sum _{j=1}^m|\lambda |^{\frac{-1-r+p(k-m_j)}{2mp}}\Vert g\Vert _{\mathscr {A}^{s_j}}. \end{aligned}$$ Note that by our choice of \(s_j\), we have $$\begin{aligned}&r-p[s+2m+k-m_j-s_j]_+>r\\&-p(s+2m+k-m_j-s-2m-k+m_j+\tfrac{1+r}{p})=-1 \end{aligned}$$ for \(s+2m+k-m_j-\frac{1+r}{p}<s_j\le s+2m+k-m_j\) and $$\begin{aligned} r-[s+2m+k-m_j-s_j]_+=r>-1 \end{aligned}$$ for \(s_j\ge s+2m+k-m_j\). The unique solution u of the full system is given by \(u=u_1+u_2\) and therefore summing up yields the assertion. \(\square \) Recall from Assumption 1.2 that \(\mathscr {C}\) stands for the Bessel potential, Besov, Triebel–Lizorkin or one of their dual scales and that we impose some conditions on the corresponding parameters. Let \(\sigma >0\), \(s_1,\ldots ,s_m,l_1,\ldots ,l_m\in \mathbb {R}\) and \(g_j\in \mathscr {C}^{l_j}(\mathbb {R}_+,w_2;\mathscr {A}^{s_j})\). 
Let further $$\begin{aligned} P_j=\{(r,t_0,l,k,p): t_0,l\in \mathbb {R},&r\in (-1,\infty ).k\in \mathbb {N}_0,p\in [1,\infty ),\\&r-p[t_0+k-m_j-s_j]_+>-1,\\&r-2mp(l-l_j)-p(k-m_j)-p[t_0-s_j]_+>-1\} \end{aligned}$$ the set of admissible parameters. Then, the equation $$\begin{aligned} \partial _t u +\sigma u- A(D) u&= 0\quad \;\text {in }\mathbb {R}\times \mathbb {R}^n_+,\nonumber \\ B_j(D)u&=g_j\quad \text {on }\mathbb {R}_+\times \mathbb {R}^{n-1}, \end{aligned}$$ has a unique solution \(u\in \mathscr {S}'(\mathbb {R}\times \mathbb {R}^n_+;E)\). This solution satisfies $$\begin{aligned} u\in \sum _{j=1}^m\bigcap _{(r,t_0,l,k,p)\in P_j}\mathscr {C}^{l}(\mathbb {R},w_2;W^{k}_p(\mathbb {R}_+,|{\text {pr}}_n|^r;\mathscr {A}^{t_0})) \end{aligned}$$ and for all \((r,t_0,l,k,p)\in \bigcap _{j=1}^mP_j\) there is a constant \(C>0\) independent of \(g_1,\ldots ,g_m\) such that $$\begin{aligned} \Vert u\Vert _{\mathscr {C}^{l}(\mathbb {R},w_2;W^{k}_p(\mathbb {R}_+,|{\text {pr}}_n|^r;\mathscr {A}^{t_0}))}\le C\sum _{j=1}^m\Vert g_j\Vert _{\mathscr {C}^{l_j}(\mathbb {R},w_2;\mathscr {A}^{s_j})}. \end{aligned}$$ We apply the Fourier transform \(\mathscr {F}_{t\mapsto \tau }\) in time to (6-1) and obtain $$\begin{aligned} (\sigma +i\tau ) \hat{u}- A(D) \hat{u}&= 0\quad \;\text {in }\mathbb {R}\times \mathbb {R}^n_+,\nonumber \\ B_j(D)\hat{u}&=\hat{g}_j\quad \text {on }\mathbb {R}_+\times \mathbb {R}^{n-1}. \end{aligned}$$ Hence, the solution of (6-1) is given by $$\begin{aligned} u(t,x)=\sum _{j=1}^m\mathscr {F}_{t\rightarrow \tau }^{-1}{\text {Poi}}_j(\sigma +i\tau )\mathscr {F}_{t\rightarrow \tau }g_j. \end{aligned}$$ From Theorem 4.16 together with Lemma 2.5, it follows that $$\begin{aligned}{}[\tau \mapsto {\text {Poi}}_j(\sigma +i\tau )]\in S^{\frac{-1-r+p(k-m_j)+p[{t_0}-s_j]_+}{2mp}+\varepsilon }_{\mathcal {R}}(\mathbb {R},\mathcal {B}(\mathscr {A}^{s_j},W^k_p(\mathbb {R}_+,|{\text {pr}}_n|^r;\mathscr {A}^{t_0}))) \end{aligned}$$ for arbitrary \(\varepsilon >0\) if the parameters satisfy \(r-p[{t_0}+k-m_j-s_j]_+>-1\). Hence, the parameter-independent version of Proposition 3.6 (as in Remark 3.3 (g)) yields $$\begin{aligned}&\mathscr {F}_{t\rightarrow \tau }^{-1}{\text {Poi}}_j(\sigma +i\tau )\mathscr {F}_{t\rightarrow \tau }g_j\in \mathscr {C}^{l_j-\varepsilon +\frac{1+r-p(k-m_j)-p[{t_0}-s_j]_+}{2mp}}\\&\quad (\mathbb {R},w_2;W^{k}_p(\mathbb {R}_+,|{\text {pr}}_n|^r;\mathscr {A}^{t_0})) \end{aligned}$$ as well as the estimate $$\begin{aligned}&\Vert \mathscr {F}_{t\rightarrow \tau }^{-1}{\text {Poi}}_j(\sigma +i\tau )\mathscr {F}_{t\rightarrow \tau }g_j\Vert _{\mathscr {C}^{l_j-\varepsilon +\frac{1+r-p(k-m_j)-p[{t_0}-s_j]_+}{2mp}}(\mathbb {R},w_2;W^{k}_p(\mathbb {R}_+,|{\text {pr}}_n|^r;\mathscr {A}^{t_0})}\\&\quad \le C\Vert g_j\Vert _{\mathscr {C}^{l_j}(\mathbb {R},w_2;\mathscr {A}^{s_j})}. \end{aligned}$$ But the condition $$\begin{aligned} r-2mp(l-l_j)-p(k-m_j)-p[{t_0}-s_j]_+>-1 \end{aligned}$$ implies $$\begin{aligned} l\le l_j-\varepsilon +\frac{1+r-p(k-m_j)-p[{t_0}-s_j]_+}{2mp}, \end{aligned}$$ if \(\varepsilon >0\) is chosen small enough. 
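Indeed, this implication is a mere rearrangement: the condition above is equivalent to $$\begin{aligned} l<l_j+\frac{1+r-p(k-m_j)-p[{t_0}-s_j]_+}{2mp}, \end{aligned}$$ so that every sufficiently small \(\varepsilon >0\) is admissible.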
Therefore, we obtain $$\begin{aligned} \mathscr {F}_{t\rightarrow \tau }^{-1}{\text {Poi}}_j(\sigma +i\tau )\mathscr {F}_{t\rightarrow \tau }g_j\in \mathscr {C}^{l}(\mathbb {R},w_2;W^{k}_p(\mathbb {R}_+,|{\text {pr}}_n|^r;\mathscr {A}^{t_0})) \end{aligned}$$ and the estimate $$\begin{aligned} \Vert \mathscr {F}_{t\rightarrow \tau }^{-1}{\text {Poi}}_j(\sigma +i\tau )\mathscr {F}_{t\rightarrow \tau }g_j\Vert _{\mathscr {C}^{l}(\mathbb {R},w_2;W^{k}_p(\mathbb {R}_+,|{\text {pr}}_n|^r;\mathscr {A}^{t_0})}\le C\Vert g_j\Vert _{\mathscr {C}^{l_j}(\mathbb {R},w_2;\mathscr {A}^{s_j})}. \end{aligned}$$ if \((r,{t_0},l,k,p)\in P_j\). Taking the sum over all \(j=1,\ldots ,m\) yields the assertion. \(\square \) If \(\mathscr {C}\) does not stand for the Bessel potential scale or if \(p>\max \{p_0,q_0,q_E\}\) where \(q_E\) denotes the cotype of E, then the parameter set \(P_j\) in Theorem 6.4 can potentially be chosen slightly larger, namely $$\begin{aligned} P_j=\{(r,{t_0},l,k,p): {t_0},l\in \mathbb {R},&r\in (-1,\infty ).k\in \mathbb {N}_0,p\in [1,\infty ),\\&r-p[{t_0}+k-m_j-s_j]_+>-1,\\&r-2mp(l-l_j)-p(k-m_j)-p[{t_0}-s_j]_+\ge -1\}. \end{aligned}$$ Indeed, if \(p>\max \{p_0,q_0,q_E\}\), then $$\begin{aligned}{}[\tau \mapsto {\text {Poi}}_j(\sigma +i\tau )]\in S^{\frac{-1-r+p(k-m_j)+p[{t_0}-s_j]_+}{2mp}}_{\mathcal {R}}(\mathbb {R},\mathcal {B}(\mathscr {A}^{s_j},W^k_p(\mathbb {R}_+,|{\text {pr}}_n|^r;\mathscr {A}^{t_0}))) \end{aligned}$$ by Theorem 4.16. If one continues the proof of Theorem 6.4 with this information, then one will find that the \(\varepsilon \) in (6-4) can be removed so that the inequality (6-3) does not have to be strict. The same holds for Besov and Triebel–Lizorkin scale, as in this case $$\begin{aligned}{}[\tau \mapsto {\text {Poi}}_j(\sigma +i\tau )]\in S^{\frac{-1-r+p(k-m_j)+p[{t_0}-s_j]_+}{2mp}}(\mathbb {R},\mathcal {B}(\mathscr {A}^{s_j},W^k_p(\mathbb {R}_+,|{\text {pr}}_n|^r;\mathscr {A}^{t_0}))) \end{aligned}$$ is good enough and holds without restriction on p. As in Remark 6.2, we can take the trace \({\text {tr}}_{x_n=0}B_j(D)u\) in the classical sense if k is large enough and if l and \({t_0}\) are small enough. Again, we can use interpolation techniques to extend the result in Theorem 6.4 to the case in which the Bessel potential or Besov scale are taken in normal direction. However, this can only be done for \(r\in (-1,p-1)\). Let \(\alpha \in (0,1)\), \(T>0\), \(s,t_0\in \mathbb {R}\), \(p\in (1,\infty )\), \(r\in (-1,p-1)\), \(\mu \in (-1,\infty )\), \(v_{\mu }(t)=t^{\mu }\) \((t\in (0,T])\) and \(s_1,\ldots ,s_m,l_1,\ldots ,l_m\in \mathbb {R}\). Assume that \(\mu \in (-1,q_2)\) if \(\mathscr {C}\) belongs to the Bessel potential scale. Let again and \(k\in [0,k_{\max }]\cap \mathbb {N}_0\). We further assume that $$\begin{aligned} l_j>\frac{1+\mu }{q_2}-\frac{1+r}{2mp}+\frac{k-m_j+[t_0-s_j]_+}{2m}\quad \text {and}\quad s_j>t_0+k-m_j-\frac{1+r}{p} \end{aligned}$$ for all \(j=1,\ldots ,m\). Suppose that E satisfies Pisier's property \((\alpha )\). 
Then, for all \(u_0\in H_p^k(\mathbb {R}_+,|{\text {pr}}_n|^{r};\mathscr {A}^{t_0})\), all \(\alpha \)-Hölder continuous \(f\in C^{\alpha }((0,T);H_p^k(\mathbb {R}_+,|{\text {pr}}_n|^{r};\mathscr {A}^{t_0}))\) with \(\alpha \in (0,1)\) and \(g_j\in \mathscr {C}^{l_j}([0,T],v_{\mu };\mathscr {A}^{s_j})\) there is a unique solution u of the equation $$\begin{aligned} \partial _t u- A(D) u&=f\quad \;\text {in }(0,T]\times \mathbb {R}^{n}_+,\nonumber \\ B_j(D)u&=g_j\quad \text {on }(0,T]\times \mathbb {R}^{n-1},\nonumber \\ u(0,\,\cdot \,)&=u_0 \end{aligned}$$ which satisfies $$\begin{aligned} u&\in C([0,T];H^{k}_{p}(\mathbb {R}_{+,x_n},|{\text {pr}}_n|^{r};\mathscr {A}^{t_0})),\\ u&\in \mathscr {C}^{l^*}((0,T],v_{\mu };H^{k+2m}_p(\mathbb {R}_{+,x_n},|{\text {pr}}_n|^{r};\mathscr {A}^{t_0-2m})),\\ u&\in C^1((0,T];H^{k}_{p}([\delta ,\infty )_{x_n},|{\text {pr}}_n|^{r};\mathscr {A}^{t_0})),\\ u&\in C((0,T];H^{k+2m}_p([\delta ,\infty )_{x_n},|{\text {pr}}_n|^{r};\mathscr {A}^{t_0})\cap H^k_p([\delta ,\infty )_{x_n},|{\text {pr}}_n|^{r};\mathscr {A}^{t_0+2m})) \end{aligned}$$ for all \(\delta >0\) and some \(l^*\in \mathbb {R}\). First, we substitute \(v(t,\,\cdot \,)=e^{-\sigma t}u(t,\,\cdot \,)\) for some \(\sigma >0\). Since we work on a bounded time interval [0, T], this multiplication is an automorphism of all the spaces we consider in this theorem. Hence, it suffices to look for a solution of the equation $$\begin{aligned} \partial _t v+\sigma v- A(D) v&=\widetilde{f}\quad \;\text {in }[0,T]\times \mathbb {R}^{n}_+,\\ B_j(D)v&=\widetilde{g}_j\quad \text {on }[0,T]\times \mathbb {R}^{n-1},\\ v(0,\,\cdot \,)&=u_0, \end{aligned}$$ where \(\widetilde{f}(t)=e^{-\sigma t} f(t)\) and \(\widetilde{g}_j(t)=e^{-\sigma t} g_j(t)\). We split v into two parts \(v=r_{[0,T]}v_1+v_2\) which are defined as follows: \(v_1\) solves the equation $$\begin{aligned} \partial _t v_1+\sigma v_1- A(D) v_1&=0\quad \;\text {in }\mathbb {R}\times \mathbb {R}^{n}_+,\\ B_j(D)v_1&=\mathscr {E}\widetilde{g}_j\quad \text {on }\mathbb {R}\times \mathbb {R}^{n-1}, \end{aligned}$$ where \(\mathscr {E}\) is a suitable extension operator and \(r_{[0,T]}\) is the restriction to [0, T]. Moreover, \(v_2\) is the solution of $$\begin{aligned} \partial _t v_2+\sigma v_2- A(D) v_2&=\widetilde{f}\quad \;\text {in }[0,T]\times \mathbb {R}^{n}_+,\nonumber \\ B_j(D)v_2&=0\quad \text {on }[0,T]\times \mathbb {R}^{n-1},\nonumber \\ v_2(0,\,\cdot \,)&=v_0-v_1(0,\,\cdot \,). \end{aligned}$$ For \(v_1\), it follows from Theorem 6.4 that $$\begin{aligned} v_1\in \sum _{j=1}^m\bigcap _{(r',t',l',k',p')\in P_j}\mathscr {C}^{l'}(\mathbb {R},v_{\mu };W^{k'}_{p'}(\mathbb {R}_+,|{\text {pr}}_n|^{r'};\mathscr {A}^{t'})) \end{aligned}$$ and for all \((r',t',l',k',p')\in \bigcap _{j=1}^mP_j\) there is a constant \(C>0\) independent of \(g_1,\ldots ,g_m\) such that $$\begin{aligned} \Vert v_1\Vert _{\mathscr {C}^{l'}(\mathbb {R},v_{\mu };W^{k'}_{p'}(\mathbb {R}_+,|{\text {pr}}_n|^{r'};\mathscr {A}^{t'}))}\le C\sum _{j=1}^m\Vert \mathscr {E}\widetilde{g}_j\Vert _{\mathscr {C}^{l_j}(\mathbb {R},v_{\mu };\mathscr {A}^{s_j})}. \end{aligned}$$ In particular, if \(l'>\frac{1+\mu }{q_2}\) we have \(v_1\in BUC([0,T];W^{k'}_{p'}(\mathbb {R}_+,|{\text {pr}}_n|^{r'};\mathscr {A}^{t'}))\) so that we can take the time trace \(v_1(0)\), see [37, Proposition 7.4]. If condition (6-5) is satisfied, then we can choose \(l>\frac{1+\mu }{q_2}\) small enough such that \((r,t_0,l,k,p)\in \bigcap _{j=1}^mP_j\). 
Hence, under this condition we obtain $$\begin{aligned} v_1(0)\in W^{k}_p(\mathbb {R}_+,|{\text {pr}}_n|^r;\mathscr {A}^{t_0}) \end{aligned}$$ is well defined. But since we have \(r\in (-1,p-1)\), it follows from Theorem 5.4 that \(A_B-\sigma \) generates a holomorphic \(C_0\)-semigroup \((T(t))_{t\ge 0}\) in \(W^{k}_p(\mathbb {R}_+,|{\text {pr}}_n|^r;\mathscr {A}^{t_0})\). In addition, \(v_2\) is given by $$\begin{aligned} v_2(t)=T(t)[v_0-v_1(0,\,\cdot \,)]+\int _{0}^t T(t-s) f(s)\,ds. \end{aligned}$$ It follows from standard semigroup theory that $$\begin{aligned} v_2\in C((0,T];D(A_B))\cap C^1((0,T];X)\cap C([0,T];X), \end{aligned}$$ $$\begin{aligned}&X=H^{k}_{p}(\mathbb {R}_{+,x_n},|{\text {pr}}_n|^{r};\mathscr {A}^{t_0}),\quad D(A_B)=D^{k,2m,s}_{r,B}(\mathbb {R}_+)\\&\hookrightarrow H^{k+2m}_p(\mathbb {R}_{+,x_n},|{\text {pr}}_n|^{r};\mathscr {A}^{t_0})\cap H^k_p(\mathbb {R}_{+,x_n},|{\text {pr}}_n|^{r};\mathscr {A}^{{t_0}+2m}), \end{aligned}$$ see for example [39, Chapter 4, Corollary 3.3]. Since also \(v_1\in BUC([0,T];W^{k}_{p}(\mathbb {R}_+,|{\text {pr}}_n|^{r};\mathscr {A}^{t_0}))\), we obtain $$\begin{aligned} u\in C([0,T];H^{k}_{p}(\mathbb {R}_{+,x_n},|{\text {pr}}_n|^{r};\mathscr {A}^{t_0})), \end{aligned}$$ and, since \(v_1\) is arbitrarily smooth away from the boundary, also $$\begin{aligned} u&\in C^1((0,T];H^{k}_{p}([\delta ,\infty )_{x_n},|{\text {pr}}_n|^{r};\mathscr {A}^{t_0})),\\ u&\in C((0,T];H^{k+2m}_p([\delta ,\infty )_{x_n},|{\text {pr}}_n|^{r};\mathscr {A}^{t_0})\cap H^k_p([\delta ,\infty )_{x_n},|{\text {pr}}_n|^{r};\mathscr {A}^{t_0+2m})) \end{aligned}$$ for all \(\delta >0\). Concerning the value of \(l^*\), we note that if \((r,{t_0},l,k,p)\in \bigcap _{j=1}^mP_j\), then also \((r,{t_0}-2m,l-1,k+2m,p)\in \bigcap _{j=1}^mP_j\). Hence, we just have to take \(l^*\le l-1\) such that $$\begin{aligned}&C((0,T];H^{k+2m}_p(\mathbb {R}_{+,x_n},|{\text {pr}}_n|^{r};\mathscr {A}^{t_0}))\\&\hookrightarrow \mathscr {C}^{l^*}((0,T],v_{\mu };H^{k+2m}_p(\mathbb {R}_{+,x_n},|{\text {pr}}_n|^{r};\mathscr {A}^{t_0-2m})). \end{aligned}$$ Altogether, this finishes the proof. \(\square \) While we can treat arbitrary space regularity of the boundary data in Theorem 6.6, it is important to note that (6-5) poses a restriction on the time regularity of the boundary data. Even if we take \({t_0}\le \min _{j=1,\ldots ,m} s_j\), \(k=0\), r very close to \(p-1\) and \(q_2\) very large, we still have the restriction $$\begin{aligned} l_j>-\frac{1+m_j}{2m}. \end{aligned}$$ In the case of the heat equation with Dirichlet boundary conditions, this would mean that the boundary data needs to have a time regularity strictly larger than \(-\frac{1}{2}\). Having boundary noise in mind, it would be interesting to go beyond this border. It would need further investigation whether this is possible or not. In fact, (6-5) gives a restriction on the time regularity only because we do not allow \(r\ge p-1\), otherwise we could just take r very large and allow arbitrary regularity in time. The reason why we have to restrict to \(r<p-1\) is that we want to apply the semigroup to the time trace \(v_1(0)\). However, until now we can only do this for \(r\in (-1,p-1)\). Hence, if one wants to improve Theorem 6.6 to the case of less time regularity, there are at least two possible directions: One could try to generalize Theorem 5.4 to the case in which \(r>p-1\). 
In fact, in [32] Lindemulder and Veraar derive a bounded \(\mathcal {H}^{\infty }\)-calculus for the Dirichlet Laplacian in weighted \(L_p\)-spaces with power weights of order \(r\in (-1,2p-1){\setminus }\{p-1\}\). It would be interesting to see whether their methods also work for \(L_p(\mathbb {R}_+,|{\text {pr}}|^r;\mathscr {A}^s)\) with \(r\in (p-1,2p-1)\). One could try to determine all initial data \(u_0\) which is given by \(u_0=\widetilde{u}_0+v_1(0)\) where \(\widetilde{u}_0\in H^k_p(\mathbb {R}_+,|{\text {pr}}_n|^r;\mathscr {A}^t)\) and \(v_1\) is the solution to $$\begin{aligned} \partial _t v_1+\sigma v_1- A(D) v_1&=0\quad \;\text {in }\mathbb {R}\times \mathbb {R}^{n}_+,\\ B_j(D)v_1&=\widetilde{g}_j\quad \text {in }\mathbb {R}\times \mathbb {R}^{n}_+, \end{aligned}$$ for some \(\widetilde{g}_j\in \mathscr {C}^{l_j}(\mathbb {R},v_{\mu };\mathscr {A}^{s_j})\) satisfying \(\widetilde{g}_j\vert _{[0,T]}=g_j\). For such initial data, the initial boundary value problem can be solved with our methods for arbitrary time regularity of the boundary data. Indeed, in this case we just have to take the right extension of \(g_j\) so that \(u_0-v_1(0)\in H^k_p(\mathbb {R}_+,|{\text {pr}}|^r;\mathscr {A}^t)\). Then, we can just apply the semigroup in order to obtain the solution of (6-7). E. Alòs and S. Bonaccorsi. Stochastic partial differential equations with Dirichlet white-noise boundary conditions. Ann. Inst. H. Poincaré Probab. Statist., 38(2):125–154, 2002. H. Amann. Navier-Stokes equations with nonhomogeneous Dirichlet data. J. Nonlinear Math. Phys., 10(suppl. 1):1–11, 2003. A. Anop, R. Denk, and A. Murach. Elliptic problems with rough boundary data in generalized sobolev spaces. arXiv preprintarXiv:2003.05360, 2020. S. Aziznejad and J. Fageot. Wavelet Analysis of the Besov Regularity of Lévy White Noises. arXiv preprintarXiv:1801.09245v2, 2020. Z. Brzeźniak, B. Goldys, S. Peszat, and F. Russo. Second order PDEs with Dirichlet white noise boundary conditions. J. Evol. Equ., 15(1):1–26, 2015. G. Da Prato and J. Zabczyk. Evolution equations with white-noise boundary conditions. Stochastics Stochastics Rep., 42(3-4):167–182, 1993. R. Denk, G. Dore, M. Hieber, J. Prüss, and A. Venni. New thoughts on old results of R. T. Seeley. Math. Ann., 328(4):545–583, 2004. R. Denk, M. Hieber, and J. Prüss. \(\mathscr {R}\)-boundedness, Fourier multipliers and problems of elliptic and parabolic type. Mem. Amer. Math. Soc., 166(788):viii+114, 2003. R. Denk, M. Hieber, and J. Prüss. Optimal \(L^p\)-\(L^q\)-estimates for parabolic boundary value problems with inhomogeneous data. Math. Z., 257(1):193–224, 2007. R. Denk and T. Krainer. \(\mathscr {R}\)-boundedness, pseudodifferential operators, and maximal regularity for some classes of partial differential operators. Manuscripta Math., 124(3):319–342, 2007. R. Denk, J. Prüss, and R. Zacher. Maximal \(L_p\)-regularity of parabolic problems with boundary dynamics of relaxation type. J. Funct. Anal., 255(11):3149–3187, 2008. G. Dore and A. Venni. On the closedness of the sum of two closed operators. Math. Z., 196(2):189–201, 1987. G. Dore and A. Venni. \(H^\infty \) functional calculus for an elliptic operator on a half-space with general boundary conditions. Ann. Sc. Norm. Super. Pisa Cl. Sci. (5), 1(3):487–543, 2002. X. T. Duong. \(H_\infty \) functional calculus of elliptic operators with \(C^\infty \) coefficients on \(L^p\) spaces of smooth domains. J. Austral. Math. Soc. Ser. A, 48(1):113–123, 1990. S. Fackler, T. P. Hytönen, and N. Lindemulder. 
Weighted estimates for operator-valued fourier multipliers. Collect. Math. 71(3):511–548, 2020. J. Fageot, A. Fallah, and M. Unser. Multidimensional Lévy white noise in weighted Besov spaces. Stochastic Process. Appl., 127(5):1599–1621, 2017. L. Grafakos. Classical Fourier analysis, volume 249 of Graduate Texts in Mathematics. Springer, New York, second edition, 2008. L. Grafakos. Modern Fourier analysis, volume 250 of Graduate Texts in Mathematics. Springer, New York, second edition, 2009. G. Grubb. Nonhomogeneous Dirichlet Navier-Stokes problems in low regularity \(L_p\) Sobolev spaces. J. Math. Fluid Mech., 3(1):57–81, 2001. B. H. Haak, M. Haase, and P. C. Kunstmann. Perturbation, interpolation, and maximal regularity. Adv. Differential Equations, 11(2):201–240, 2006. M. Hieber and J. Prüss. Heat kernels and maximal \(L^p\)-\(L^q\) estimates for parabolic evolution equations. Comm. Partial Differential Equations, 22(9-10):1647–1669, 1997. F. Hummel and N. Lindemulder. Elliptic and parabolic boundary value problems in weighted function spaces. arXiv preprintarXiv:1911.04884v1, 2019. T. Hytönen, J. van Neerven, M. Veraar, and L. Weis. Analysis in Banach spaces. Vol. I. Martingales and Littlewood-Paley theory, volume 63 of Ergebnisse der Mathematik und ihrer Grenzgebiete. 3. Folge. A Series of Modern Surveys in Mathematics [Results in Mathematics and Related Areas. 3rd Series. A Series of Modern Surveys in Mathematics]. Springer, Cham, 2016. T. Hytönen, J. van Neerven, M. Veraar, and L. Weis. Analysis in Banach spaces. Vol. II, volume 67 of Ergebnisse der Mathematik und ihrer Grenzgebiete. 3. Folge. A Series of Modern Surveys in Mathematics [Results in Mathematics and Related Areas. 3rd Series. A Series of Modern Surveys in Mathematics]. Springer, Cham, 2017. Probabilistic methods and operator theory. T. Hytönen and M. Veraar. \(R\)-boundedness of smooth operator-valued functions. Integral Equations Operator Theory, 63(3):373–402, 2009. M. Kabanava. Tempered Radon measures. Rev. Mat. Complut., 21(2):553–564, 2008. M. Kaip and J. Saal. The permanence of \(\mathscr {R}\)-boundedness and property\((\alpha )\) under interpolation and applications to parabolic systems. J. Math. Sci. Univ. Tokyo, 19(3):359–407, 2012. A. Kufner and B. Opic. How to define reasonably weighted Sobolev spaces. Comment. Math. Univ. Carolin., 25(3):537–554, 1984. P. C. Kunstmann and L. Weis. Maximal \(L_p\)-regularity for parabolic equations, Fourier multiplier theorems and \(H^\infty \)-functional calculus. In Functional analytic methods for evolution equations, volume 1855 of Lecture Notes in Math., pages 65–311. Springer, Berlin, 2004. N. Lindemulder. Second order operators subject to dirichlet boundary conditions in weighted triebel-lizorkin spaces: Parabolic problems. arXiv preprintarXiv:1812.05462, 2018. N. Lindemulder, M. Meyries, and M. Veraar. Complex interpolation with Dirichlet boundary conditions on the half line. Math. Nachr., 291(16):2435–2456, 2018. N. Lindemulder and M. Veraar. The heat equation with rough boundary conditions and holomorphic functional calculus. J. Differ. Equ. 269(7):5832–5899, 2020. J.-L. Lions and E. Magenes. Non-homogeneous boundary value problems and applications. Vol. I. Springer-Verlag, New York-Heidelberg, 1972. Translated from the French by P. Kenneth, Die Grundlehren der mathematischen Wissenschaften, Band 181. J.-L. Lions and E. Magenes. Non-homogeneous boundary value problems and applications. Vol. II. Springer-Verlag, New York-Heidelberg, 1972. 
Translated from the French by P. Kenneth, Die Grundlehren der mathematischen Wissenschaften, Band 182. J.-L. Lions and E. Magenes. Non-homogeneous boundary value problems and applications. Vol. III. Springer-Verlag, New York-Heidelberg, 1973. Translated from the French by P. Kenneth, Die Grundlehren der mathematischen Wissenschaften, Band 183. M. Meyries. Maximal regularity in weighted spaces, nonlinear boundary conditions, and global attractors. PhD thesis, Karlsruhe Institute of Technology, 2010. M. Meyries and M. Veraar. Sharp embedding results for spaces of smooth functions with power weights. Studia Math., 208(3):257–293, 2012. J. Milnor. Lectures on the\(h\)-cobordism theorem. Notes by L. Siebenmann and J. Sondow. Princeton University Press, Princeton, N.J., 1965. A. Pazy. Semigroups of linear operators and applications to partial differential equations, volume 44 of Applied Mathematical Sciences. Springer-Verlag, New York, 1983. P. Portal and v. Štrkalj. Pseudodifferential operators on Bochner spaces and an application. Math. Z., 253(4):805–819, 2006. V. S. Rychkov. Littlewood-Paley theory and function spaces with \(A^{\rm loc}_p\) weights. Math. Nachr., 224:145–180, 2001. H.-J. Schmeisser and H. Triebel. Topics in Fourier analysis and function spaces. A Wiley-Interscience Publication. John Wiley & Sons, Ltd., Chichester, 1987. R. T. Seeley. Extension of \(C^{\infty }\) functions defined in a half space. Proc. Amer. Math. Soc., 15:625–626, 1964. H. Triebel. Interpolation theory, function spaces, differential operators, volume 18 of North-Holland Mathematical Library. North-Holland Publishing Co., Amsterdam-New York, 1978. H. Triebel. Theory of function spaces, volume 78 of Monographs in Mathematics. Birkhäuser Verlag, Basel, 1983. R. M. Trigub and E. S. Bellinsky. Fourier analysis and approximation of functions. Kluwer Academic Publishers, Dordrecht, 2004. [Belinsky on front and back cover]. M. C. Veraar. Regularity of Gaussian white noise on the \(d\)-dimensional torus. In Marcinkiewicz centenary volume, volume 95 of Banach Center Publ., pages 385–398. Polish Acad. Sci. Inst. Math., Warsaw, 2011. J. Voigt. Abstract Stein interpolation. Math. Nachr., 157:197–199, 1992. L. Weis. Operator-valued Fourier multiplier theorems and maximal \(L_p\)-regularity. Math. Ann., 319(4):735–758, 2001. Since most of the material in this work is based on some results of my Ph.D. thesis and just contains generalizations, simplifications and corrections of mistakes, I would like to thank my Ph.D. supervisor Robert Denk again for his outstanding supervision. I thank Mark Veraar for the instructive discussion on the necessity of finite cotype in certain estimates, which helped me to prove Proposition 4.13. I also thank the Studienstiftung des deutschen Volkes for the scholarship during my doctorate and the EU for the partial support within the TiPES project funded by the European Union's Horizon 2020 research and innovation programme under Grant Agreement No. 820970. Moreover, I acknowledge partial support of the SFB/TR109 "Discretization in Geometry and Dynamics." Open Access funding enabled and organized by Projekt DEAL. Faculty of Mathematics, Technical University of Munich, Boltzmannstraße 3, 85748, Garching bei München, Germany Felix Hummel Correspondence to Felix Hummel. 
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. Hummel, F. Boundary value problems of elliptic and parabolic type with boundary data of negative regularity. J. Evol. Equ. 21, 1945–2007 (2021). https://doi.org/10.1007/s00028-020-00664-0 Issue Date: June 2021 Mathematics Subject Classification Primary: 35B65 Secondary: 35K52 Boundary data of negative regularity Mixed scales Mixed smoothness Poisson operators Singularities at the boundary
CommonCrawl
export.arXiv.org > physics > physics.soc-ph Physics and Society Title: A phenomenological estimate of the Covid-19 true scale from primary data Authors: Luigi Palatella, Fabio Vanni, David Lambert Subjects: Physics and Society (physics.soc-ph) Estimation of prevalence of undocumented SARS-CoV-2 infections is critical for understanding the overall impact of the Covid-19 disease. In fact, unveiling uncounted cases has fundamental implications for public policy interventions strategies. In the present work, we show a basic yet effective approach to estimate the actual number of people infected by Sars-Cov-2, by using epidemiological raw data reported by official health institutions in the largest EU countries and USA. Title: Should the government reward cooperation? Insights from an agent-based model of wealth redistribution Authors: Frank Schweitzer, Luca Verginer, Giacomo Vaccario Subjects: Physics and Society (physics.soc-ph); Multiagent Systems (cs.MA); General Economics (econ.GN); Adaptation and Self-Organizing Systems (nlin.AO) In our multi-agent model agents generate wealth from repeated interactions for which a prisoner's dilemma payoff matrix is assumed. Their gains are taxed by a government at a rate $\alpha$. The resulting budget is spent to cover administrative costs and to pay a bonus to cooperative agents, which can be identified correctly only with a probability $p$. Agents decide at each time step to choose either cooperation or defection based on different information. In the local scenario, they compare their potential gains from both strategies. In the global scenario, they compare the gains of the cooperative and defective subpopulations. We derive analytical expressions for the critical bonus needed to make cooperation as attractive as defection. We show that for the local scenario the government can establish only a medium level of cooperation, because the critical bonus increases with the level of cooperation. In the global scenario instead full cooperation can be achieved once the cold-start problem is solved, because the critical bonus decreases with the level of cooperation. This allows to lower the tax rate, while maintaining high cooperation. Title: Re-examining the Role of Nuclear Fusion in a Renewables-Based Energy Mix Authors: T. E. G. Nicholas, T. P. Davis, F. Federici, J. E. Leland, B. S. Patel, C.Vincent, S. H. Ward Comments: 31 pages, 3 figures, submitted to Energy Policy Journal-ref: Energy Policy, Volume 149, February 2021, 112043 Fusion energy is often regarded as a long-term solution to the world's energy needs. However, even after solving the critical research challenges, engineering and materials science will still impose significant constraints on the characteristics of a fusion power plant. Meanwhile, the global energy grid must transition to low-carbon sources by 2050 to prevent the worst effects of climate change. We review three factors affecting fusion's future trajectory: (1) the significant drop in the price of renewable energy, (2) the intermittency of renewable sources and implications for future energy grids, and (3) the recent proposition of intermediate-level nuclear waste as a product of fusion. Within the scenario assumed by our premises, we find that while there remains a clear motivation to develop fusion power plants, this motivation is likely weakened by the time they become available. 
We also conclude that most current fusion reactor designs do not take these factors into account and, to increase market penetration, fusion research should consider relaxed nuclear waste design criteria, raw material availability constraints and load-following designs with pulsed operation. [4] arXiv:2101.05510 (cross-list from cs.SI) [pdf, other] Title: Signal Processing on Higher-Order Networks: Livin' on the Edge ... and Beyond Authors: Michael T. Schaub, Yu Zhu, Jean-Baptiste Seby, T. Mitchell Roddenberry, Santiago Segarra Comments: 38 pages; 7 figures Subjects: Social and Information Networks (cs.SI); Machine Learning (cs.LG); Physics and Society (physics.soc-ph); Machine Learning (stat.ML) This tutorial paper presents a didactic treatment of the emerging topic of signal processing on higher-order networks. Drawing analogies from discrete and graph signal processing, we introduce the building blocks for processing data on simplicial complexes and hypergraphs, two common abstractions of higher-order networks that can incorporate polyadic relationships.We provide basic introductions to simplicial complexes and hypergraphs, making special emphasis on the concepts needed for processing signals on them. Leveraging these concepts, we discuss Fourier analysis, signal denoising, signal interpolation, node embeddings, and non-linear processing through neural networks in these two representations of polyadic relational structures. In the context of simplicial complexes, we specifically focus on signal processing using the Hodge Laplacian matrix, a multi-relational operator that leverages the special structure of simplicial complexes and generalizes desirable properties of the Laplacian matrix in graph signal processing. For hypergraphs, we present both matrix and tensor representations, and discuss the trade-offs in adopting one or the other. We also highlight limitations and potential research avenues, both to inform practitioners and to motivate the contribution of new researchers to the area. [5] arXiv:2101.05561 (cross-list from cond-mat.stat-mech) [pdf, other] Title: Measles-induced immune amnesia and its effects in concurrent epidemics Authors: Guillermo B. Morales, Miguel A. Muñoz Subjects: Statistical Mechanics (cond-mat.stat-mech); Physics and Society (physics.soc-ph); Populations and Evolution (q-bio.PE) It has been recently discovered that the measles virus can wipe out the adaptive immune system, destroying B lymphocytes and reducing the diversity of non-specific B cells of the infected host. In particular, this implies that previously acquired immunization from vaccination or direct exposition to other pathogens could be erased in a phenomenon named "immune amnesia", whose effects can become particularly worrisome given the actual rise of anti-vaccination movements. Here we present the first attempt to incorporate immune amnesia into standard models of epidemic spreading. In particular, we analyze diverse variants of a model that describes the spreading of two concurrent pathogens causing measles and another generic disease: the SIR-IA model. Analytical and computational studies confirm that immune amnesia can indeed have important consequences for epidemic spreading, significantly altering the vaccination coverage required to reach herd-immunity for concurring infectious diseases. 
More specifically, we uncover the existence of novel propagating and endemic phases which are induced by immune amnesia, that appear both in fully-connected and more structured networks, such as random networks and power-law degree-distributed ones. In particular, the transitions from a quiescent state into these novel phases can become rather abrupt in some cases that we specifically analyze. Furthermore, we discuss the meaning and consequences of our results and their relation with, e.g., immunization strategies, together with the possibility that explosive types of transitions may emerge, making immune-amnesia effects particularly dramatic. This work opens the door to further developments and analyses of immune amnesia effects, contributing, more generally, to the theory of interacting epidemics on complex networks. [6] arXiv:2011.08103 (replaced) [pdf, other] Title: Model-free hidden geometry of complex networks Authors: Yi-Jiao Zhang, Kai-Cheng Yang, Filippo Radicchi Journal-ref: Phys. Rev. E 103, 012305 (2021) [7] arXiv:2005.11580 (replaced) [src] Title: Evolution of Cooperative Hunting in Artificial Multi-layered Societies Authors: Honglin Bao, Wolfgang Banzhaf Comments: Conflict of interest with my previous collaborators. Thus, we retract the preprint. We retract all previous versions of the paper as well, but due to the arXiv policy, previous versions cannot be removed. We ask that you ignore earlier versions and do not refer to or distribute them further. Thanks Subjects: Computers and Society (cs.CY); Neural and Evolutionary Computing (cs.NE); Adaptation and Self-Organizing Systems (nlin.AO); Physics and Society (physics.soc-ph) Title: Renewable Power Trades and Network Congestion Externalities Authors: Nayara Aguiar, Indraneel Chakraborty, Vijay Gupta Subjects: Systems and Control (eess.SY); General Economics (econ.GN); Physics and Society (physics.soc-ph)
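A side note on the "Signal Processing on Higher-Order Networks" tutorial listed above: the Hodge 1-Laplacian it builds on is assembled from two incidence matrices, $L_1 = B_1^\top B_1 + B_2 B_2^\top$. The snippet below is a minimal toy illustration of that construction and is not code from the paper; the small simplicial complex, its orientation, and the variable names are assumptions made up for the example.

```python
# Toy Hodge 1-Laplacian: vertices {0,1,2,3}, oriented edges
# (0,1),(0,2),(1,2),(1,3),(2,3), and a single filled triangle (0,1,2).
# The triangle (1,2,3) is deliberately left unfilled, creating one 1-d hole.
import numpy as np

# Node-to-edge incidence B1: -1 at the tail, +1 at the head of each edge.
B1 = np.array([
    [-1, -1,  0,  0,  0],
    [ 1,  0, -1, -1,  0],
    [ 0,  1,  1,  0, -1],
    [ 0,  0,  0,  1,  1],
])

# Edge-to-triangle incidence B2 for the filled triangle (0,1,2):
# boundary(0,1,2) = +(1,2) - (0,2) + (0,1).
B2 = np.array([[1], [-1], [1], [0], [0]])

assert not (B1 @ B2).any()  # boundary of a boundary vanishes

L1 = B1.T @ B1 + B2 @ B2.T  # Hodge 1-Laplacian = down-part + up-part
eigvals = np.linalg.eigvalsh(L1)
print(np.round(eigvals, 3))
print("harmonic modes:", int(np.sum(np.isclose(eigvals, 0.0))))
```

The single zero eigenvalue corresponds to the one unfilled triangle: the dimension of the kernel of $L_1$ counts the 1-dimensional holes, which is the harmonic piece that the tutorial's Hodge decomposition isolates.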
CommonCrawl
Pattern formation in diffusive predator-prey systems with predator-taxis and prey-taxis DCDS-B Home March 2021, 26(3): 1243-1272. doi: 10.3934/dcdsb.2020161 Global well-posedness of non-isothermal inhomogeneous nematic liquid crystal flows Dongfen Bian 1,2, and Yao Xiao 3, School of Mathematics and Statistics, Beijing Institute of Technology, Beijing 100081, China Beijing Key Laboratory on MCAACI, Beijing Institute of Technology, Beijing 100081, China The Institute of Mathematical Sciences, The Chinese University of Hong Kong, Hong Kong Received February 2018 Revised March 2020 Published May 2020 In this paper, we consider the initial-boundary value problem to the non-isothermal incompressible liquid crystal system with both variable density and temperature. Global well-posedness of strong solutions is established for initial data being small perturbation around the equilibrium state. As the tools in the proof, we establish the maximal regularities of the linear Stokes equations and parabolic equations with variable coefficients and a rigid lemma for harmonic maps on bounded domains. This paper also generalizes the result in [5] to the inhomogeneous case. Keywords: Nematic liquid crystal, strong solution, local solution, global solution, maximal regularity, inhomogeneous case. Mathematics Subject Classification: Primary: 35B35, 35B40, 35B65, 35Q35, 76D03. Citation: Dongfen Bian, Yao Xiao. Global well-posedness of non-isothermal inhomogeneous nematic liquid crystal flows. Discrete & Continuous Dynamical Systems - B, 2021, 26 (3) : 1243-1272. doi: 10.3934/dcdsb.2020161 H. Abels, Nonstationary Stokes system with variable viscosity in bounded and unbounded domains, Discrete Contin. Dyn. Syst. Ser. S, 3 (2010), 141-157. doi: 10.3934/dcdss.2010.3.141. Google Scholar H. Abels and Y. Terasawa, On Stokes operators with variable viscosity in bounded and unbounded domains, Math. Ann., 344 (2009), 381-429. doi: 10.1007/s00208-008-0311-7. Google Scholar R. A. Adams and J. F. Fournier, Sobolev Spaces, Second edition, Pure and Applied Mathematics (Amsterdam), 140. Elsevier/Academic Press, Amsterdam, 2003. Google Scholar H. Amann, Linear and Quasilinear Parabolic Problems. Vol. I. Abstract Linear Theory, Monographs in Mathematics, 89. Birkhäuser Boston, Inc., Boston, MA, 1995. doi: 10.1007/978-3-0348-9221-6. Google Scholar D. Bian and Y. Xiao, Global solution to the nematic liquid crystal flows with heat effect, J. Differential Equations, 263 (2017), 5298-5329. doi: 10.1016/j.jde.2017.06.019. Google Scholar D. Bothe and J. Prüss, $L_p$-theory for a class of Non-Newtonian fluids, SIAM J. Math. Anal., 39 (2007), 379-421. doi: 10.1137/060663635. Google Scholar R. Danchin, Density-dependent incompressible fluids in bounded domains, J. Math. Fluid Mech., 8 (2006), 333-381. doi: 10.1007/s00021-004-0147-1. Google Scholar R. Denk, M. Hieber and J. Prüss, R-boundedness, Fourier multipliers and problems of elliptic and parabolic type, Mem. Amer. Math. Soc., 166 (2003). doi: 10.1090/memo/0788. Google Scholar W.-Y. Ding and F.-H. Lin, A generalization of Eells-Sampson's theorem, J. Partial Differential Equations, 5 (1992), 13-22. Google Scholar S. J. Ding, C. Y. Wang and H. Y. Wen, Weak solutions to compressible flows of nematic liquid crystals in dimension one, Discrete Contin. Dyn. Syst. B, 15 (2011), 357-371. doi: 10.3934/dcdsb.2011.15.357. Google Scholar J. L. Ericksen, Conservation laws for liquid crystals, Trans. Soc. Rheology, 5 (1961), 23-34. doi: 10.1122/1.548883. Google Scholar J. L. 
Ericksen, Continuum theory of nematic liquid crystals, Molecular Crystals, 7 (1969), 153-164. doi: 10.1080/15421406908084869. Google Scholar E. Feireisl, M. Frémond, E. Rocca and G. Schimperna, A new approach to non-isothermal models for nematic liquid crystals, Arch. Rational Mech. Anal., 205 (2012), 651-672. doi: 10.1007/s00205-012-0517-4. Google Scholar E. Feireisl, E. Rocca and G. Schimperna, On a non-isothermal model for the nematic liquid crystals, Nonlinearity, 24 (2011), 243-257. doi: 10.1088/0951-7715/24/1/012. Google Scholar G. P. Galdi, An Introduction to the Mathematical Theory of the Nevier-Stokes Equations. Vol. I. Linearized Steady Problems, Springer Tracts in Natural Philosophy, 38. Springer-Verlag, New York, 1994. doi: 10.1007/978-1-4612-5364-8. Google Scholar J. C. Gao, Q. Tao and Z.-A. Yao, Long-time behavior of solution for the compressible nematic liquid crystal flows in $\mathbb{R}^3$, J. Differential Equations, 261 (2016), 2334-2383. doi: 10.1016/j.jde.2016.04.033. Google Scholar P. G. de Gennes and J. Prost, The Physics of Liquid Crystals, New York: Oxford University Press, 1993. Google Scholar B. L. Guo, X. Y. Xi and B. Q. Xie, Global well-posedness and decay of smooth solutions to the non-isothermal model for compressible nematic liquid crystals, J. Differential Equations, 262 (2017), 1413-1460. doi: 10.1016/j.jde.2016.10.015. Google Scholar M. Hieber, M. Nesensohn, J. Prüss and K. Schade, Dynamics of nematic liquid crystal flows: The quasilinear approach, Ann. Ints. H. Poincaré, Analyse Nonlinéaire, 33 (2016), 397-408. doi: 10.1016/j.anihpc.2014.11.001. Google Scholar M. Hieber and J. Prüss, Dynamics of the Ericksen-Leslie equations with general Leslie stress. I: The incompressible isotropic case, Math. Ann., 369 (2017), 977-996. doi: 10.1007/s00208-016-1453-7. Google Scholar M. Hieber and J. W. Prüss, Modeling and analysis of the Ericksen-Leslie equations for nematic liquid crystal flows, Handbook of Mathematical Analysis in Mechanics of Viscous Fluids, Springer, Cham, (2016), 1075–1134. Google Scholar M.-C. Hong, Global existence of solutions of the simplified Ericksen-Leslie system in dimension two, Calc. Var. Partial Differential Equations, 40 (2011), 15-36. doi: 10.1007/s00526-010-0331-5. Google Scholar M.-C. Hong, J. K. Li and Z. P. Xin, Blow-up criteria of strong solutions to the Ericksen-Leslie system in $\mathbb{R}^3$, Comm. Partial Differential Equations, 39 (2014), 1284-1328. doi: 10.1080/03605302.2013.871026. Google Scholar M.-C. Hong and Z. P. Xin, Global existence of solutions of the nematic liquid crystal flow for the Oseen-Frank model $\mathbb{R}^2$, Adv. Math., 231 (2012), 1364-1400. doi: 10.1016/j.aim.2012.06.009. Google Scholar X. P. Hu and D. H. Wang, Global solution to the three-dimensional incompressible flow of liquid crystals, Commun. Math. Phys., 296 (2010), 861-880. doi: 10.1007/s00220-010-1017-8. Google Scholar X. P. Hu and H. Wu, Global solution to the three-dimensional compressible flow of liquid crytals, SIAM J. Math. Anal., 45 (2013), 2678-2699. doi: 10.1137/120898814. Google Scholar J. R. Huang, F.-H. Lin and C. Y. Wang, Regularity and existence of global solutions to the Ericksen-Leslie system in $\mathbb{R}^2$, Commun. Math. Phys., 331 (2014), 805-850. doi: 10.1007/s00220-014-2079-9. Google Scholar F. Jiang, S. Jiang and D. H. Wang, On multi-dimensional compressible flows of nematic liquid crystals with large initial energy in a bounded domain, J. Funct. Anal., 265 (2013), 3369-3397. doi: 10.1016/j.jfa.2013.07.026. 
Google Scholar Z. Lei, D. Li and X. Y. Zhang, Remarks of global wellposedness of liquid crystal flows and heat flows of harmonic maps in tow dimensions, Proc. Amer. Math. Soc., 142 (2014), 3801-3810. doi: 10.1090/S0002-9939-2014-12057-0. Google Scholar F. M. Leslie, Some constitutive equations for liquid crystals, Arch. Rational Mech. Anal., 28 (1968), 265-283. doi: 10.1007/BF00251810. Google Scholar F. Leslie, Theory of Flow Phenomenum in Liquid Crystals, The Theory of Liquid Crystals, 4. London-New York, Academic Press, 1979, 1–81. Google Scholar J. K. Li, Global strong and weak solutions to inhomogeneous nematic liquid crystal flows in two dimensions, Nonlinear Anal., 99 (2014), 80-94. doi: 10.1016/j.na.2013.12.023. Google Scholar J. K. Li, E. S. Titi and Z. P. Xin, On the uniqueness of weak solutions to the Ericksen-Leslie liquid crystal model in $\mathbb{R}^2$, Math. Models Meth. Appl. Sci., 26 (2016), 803-822. doi: 10.1142/S0218202516500184. Google Scholar J. K. Li and Z. P. Xin, Global existence of weak solutions to the non-isothermal nematic liquid crystals in 2D, Acta Math. Sci. Ser. B Engl. Ed., 36 (2016), 973-1014. doi: 10.1016/S0252-9602(16)30054-6. Google Scholar F.-H. Lin, Nonlinear theory of defects in nematic liquid crystals: Phase transition and flow phenomena, Comm. Pure Appl. Math., 42 (1989), 789-814. doi: 10.1002/cpa.3160420605. Google Scholar F.-H. Lin and C. Liu, Nonparabolic dissipative systems modeling the flow of liquid crystals, Comm. Pure. Appl. Math., 48 (1995), 501-537. doi: 10.1002/cpa.3160480503. Google Scholar F.-H. Lin and C. Liu, Partial regularity of the nonlinear dissipative system modeling the flow of nematic liquid crystals, Discrete Comtin. Dyn. Syst., 2 (1996), 1-22. doi: 10.3934/dcds.1996.2.1. Google Scholar F. H. Lin, J. Y. Lin and C. Y. Wang, Liquid crystal flows in two dimensions, Arch. Rational Mech. Anal., 197 (2010), 297-336. doi: 10.1007/s00205-009-0278-x. Google Scholar F.-H. Lin and C. Y. Wang, On the uniqueness of heat flow of harmonic maps and hydrodynamic flow of nematic liquid crystals, Chin. Ann. Math. Ser. B, 31 (2010), 921-938. doi: 10.1007/s11401-010-0612-5. Google Scholar F. H. Lin and C. Y. Wang, Global existence of weak solutions of the nematic liquid crystal flow in dimension three, Comm. Pure Appl. Math., 69 (2016), 1532-1571. doi: 10.1002/cpa.21583. Google Scholar Q. Liu, S. Q. Liu, W. K. Tan and X. Zhong, Global well-posedness of the 2D nonhomogeneous incompressible nematic liquid crystal flows, J. Differential Equations, 261 (2016), 6521-6569. doi: 10.1016/j.jde.2016.08.044. Google Scholar [42] P. Oswald and P. Pieranski, Nematic and Cholesteric Liquid Crystals: Concepts and Physical Properties Illustrated by Experiments, CRC Press, 2005. doi: 10.1201/9780203023013. Google Scholar V. A. Solonnikov, $L_p$-estimates for solutions to the initial boundary-value problem for the generalized Stokes system in a bounded domain, J. Math. Sci. (New York), 105 (2001), 2448-2484. doi: 10.1023/A:1011321430954. Google Scholar A. M. Sonnet and E. G. Virga, Dissipative Ordered Fluids: Theories for Liquid Crystals, Springer, New York, 2012. doi: 10.1007/978-0-387-87815-7. Google Scholar M. Struwe, On the evolution of harmonic maps of Riemannian surfaces, Commun. Math. Helv., 60 (1985), 558-581. doi: 10.1007/BF02567432. Google Scholar C. Y. Wang, Well-posedness for the heat flow of harmonic maps and the liquid crystal flow with rough initial data, Arch. Rational Mech. Anal., 200 (2011), 1-19. doi: 10.1007/s00205-010-0343-5. Google Scholar M. 
Wang and W. D. Wang, Global existence of weak solution for the 2-D Ericksen-Leslie system, Calc. Var. Partial Differential Equations, 51 (2014), 915-962. doi: 10.1007/s00526-013-0700-y. Google Scholar X. Xu and Z. F. Zhang, Global regularity and uniqueness of weak solutions for the 2-D liquid crystal flows, J. Differential Equations, 252 (2012), 1169-1181. doi: 10.1016/j.jde.2011.08.028. Google Scholar
CommonCrawl
Desulfurization Model Using Solid CaO in Molten Ni-Base Superalloys Containing Al Yuki Kishimoto1,2, Satoshi Utada3, Taketo Iguchi1,2, Yuhi Mori2, Makoto Osawa2, Tadaharu Yokokawa2, Toshiharu Kobayashi2, Kyoko Kawagishi2, Shinsuke Suzuki1 & Hiroshi Harada2 Metallurgical and Materials Transactions B volume 51, pages293–305(2020)Cite this article Details of the desulfurization for molten Ni-base superalloys containing Al using solid CaO have been investigated, and the formula that explains the reaction rate has been developed. A cylindrical CaO rod was inserted into 500 g molten Ni-base superalloy TMS-1700 (MGA1700) containing 200 ppm S and held for a certain period at 1600 °C in each experiment. Sulfur content in the melt decreased with the increasing holding time of the CaO rod. Results of electron probe microanalysis show that Ca, O, S, and Al distribute in the same part of the melt/CaO interface as well as the particle boundaries of the CaO rods. The distribution of these elements suggests that CaO reacted with S in the melt to generate CaS, and Al reacted with O and CaO to form calcium aluminate slag. The desulfurization rate formula was obtained by the assumption that the rate-controlling process of the desulfurization is S diffusion through the generated layer composed of CaS and calcium aluminate slag. This formula expresses the amount of S in the melt by the diffusion term with the effective diffusion coefficient, which was obtained from the experimental results. Moreover, the time required for the desulfurization of 2 kg molten Ni-base superalloy PWA1484 using a CaO crucible, was calculated by this desulfurization rate formula which resulted in fair agreement with the actual result. A : Area of the melt/CaO interface (m2) \( c \) : Content of S in the melt (g/m2) \( c_{1} \) : Content of S in the generated layer at the interface with the melt (g/m3) Content of S in the generated layer at the end of the center side (g/m3) \( C_{\text{S,0}} \) : Initial S content in the melt (ppm) \( C_{\text{S,t}} \) : Content of S in the melt at time t (ppm) \( C_{\text{S, fin}} \) : Final content of S in the melt (= target S content) (ppm) D : Effective diffusion coefficient (m2/s) J : Flux of S (g/m2 s) l : Distance of S transfer (m) \( M_{\text{CaO}} \) : Chemical formula weight of CaO (g/mol) M S : Chemical formula weight of S (g/mol) t : Desulfurization time (s) Amount of transferred S (= amount of desulfurization) (g) \( W_{\text{Ni}} \) : Amount of the melt (g) \( \delta_{\text{t}} \) : Thickness of the homogeneously generated layer at time t (m) \( \rho_{\text{CaO}} \) : Density of CaO (g/m3) \( \rho_{\text{Ni}} \) : Density of the melt (g/m3) H. Harada, T. Tetsui, Y. Gu, J. Fujioka, K. Kawagishi, and K. Matsumoto: Journal of the Gas Turbine Society of Japan, 2013, vol. 41, pp.85–92. T. Matsui: Journal of the Gas Turbine Society of Japan, 2011, vol. 39, pp. 261–266. P. Caron: Superalloys, 2000, pp. 737–46. K. Kawagishi, A. Yeh, T. Yokokawa, T. Kobayashi, Y. Koizumi, and H. Harada: Superalloys, 2012, pp. 189–195. J. J. de Barbadillo: Metall. Trans. A, 1983, vol. 14A, pp. 329–341. R. R. Srivastava, M. Kim, J. Lee, M. K. Jha, and B. Kim: Journal of Materials Science, 2014, vol. 49, pp. 4671–4686. S. Utada, Y. Joh, M. Osawa, T. Yokokawa, T. Kobayashi, K. Kawagishi, S. Suzuki, and H. Harada: Proceedings of the Inter- 580 national Gas Turbine Congress, 2015, pp. 1039–43. S. Utada, Y. Joh, M. Osawa, T. Yokokawa, T. Kobayashi, K. Kawagishi, S. Suzuki, and H. Harada: Superalloys, 2016, pp. 591–99. 
J. X. Dong, X. S. Xie, and R. G. Thompson: Metallurgical and Materials Transactions A, 2000, vol. 31A, pp. 2135–2144. Y. Joh, S. Utada, M. Osawa, T. Kobayashi, T. Yokokawa, K. Kawagishi, S. Suzuki, and H. Harada: Materials Transactions, 2016, vol. 57, pp. 1305–1308. S. Utada, Y. Joh, M. Osawa, T. Yokokawa, T. Sugiyama, T. Kobayashi, K. Kawagishi, S. Suzuki, and H. Harada: Metallurgical and Materials Transactions A, 2018, vol. 49, pp. 4029-4041. D. W. Yun, S. M. Seo, H.W. Jeong, and Y. S. Yoo: Corrosion Science 90, 2015, pp. 392–401. H. J. Grabke, D. Wiemer, and H.Viefhaus: Applied Surface science, 1991, vol. 47, pp. 243–250. M. A. Smith, W. E. Frazier, and B. A. Pregger: Mater. Sci. Eng. A, 1995, vol. A203, pp. 388-398. C. Sarioglu, C. Stinner, J.R. Blachere, N. Birks, F.S. Pettit, G.H. Meier, and J. L. Smialek: Superalloys, 1996, pp. 71–81. Y. Joh, T. Kobayashi, T. Yokokawa, K. Kawagishi, M. Osawa, S. Suzuki, and H. Harada: 10th Liege Conf.: Materials for Advanced Power Engineering, 2014, pp. 538–44. T. Ototani, Y, Kataura, and T, Degawa: Tetsu-to-Hagané, 1975, vol. 61, pp. 2167–2181. J. C. Niedringhaus and R. J. Fruehan: Metallurgical Transactions B, 1988, vol. 19B, pp. 261–268. T. Tanaka, Y. Ogiso, M. Ueda, and J. Lee: ISIJ Int., 2010, vol. 50, pp. 1071-1077. K. Takahashi, K. Utagawa, H. Shibata, S. Kitamura, N. Kikuchi, and Y. Kishimoto: ISIJ Int., 2012, vol. 52, pp. 10-17. T. Degawa and T, Ototani: Tetsu-to-Hagané, 1987, vol. 73, pp. 1684–1690. R. J. Fruehan: Metallurgical Transactions B, 1978, vol. 9B, pp. 287–292. A. Saelim and D.R. Gaskell: Metallurgical Transaction B, 1983, vol. 14B, pp. 259–266. T. Degawa,T. Ototani: Tetsu-to-Hagané, 1987 vol. 73, pp. 1691–1697. J. Niu, K. Yang, T. Jin, X. Sun, H. Guan, and Z. Hu: Journal of Materials Science & Technology, 2003, vol. 19, pp. 69–72. V. V. Sidorov and P. G. Min: Russian Metallurgy, 2014, vol. 2014, pp. 987–991. V. V. Sidorov, V. E. Rigin, P. G. Min, and Y. I. Folomeikin: Russian Metallurgy, 2015, vol. 2015, pp. 910–915. T. Sugiyama, S. Utada, T. Yokokawa, M. Osawa, K. Kawagishi, S. Suzuki, and H. Harada: Metallurgical and Materials Transactions A, 2019, vol. 50, pp. 3903-3911. K. Kawagishi, T. Yokokawa, T. Kobayashi, Y. Koizumi, M. Sakamoto, M. Yuyama, H. Harada, I. Okada, M. Taneike, and H. Oguma: Superalloys, 2016, pp. 115–22. K. Kawagishi, H. Harada, T. Yokokawa, Y. Koizumi, T. Kobayashi, M. Sakamoto, M. Yuyama, M. Taneike, I. Okada, S. Shimohata, H. Oguma, R. Okimoto, K. Tsukagoshi, Y. Uemura, J. Masada, and S. Torii: Patent WO2014024734A1, 2013, WPIO/PCT. A.D. Cetel and D.N. Duhl: Superalloys, 1988, pp. 235–44. B. Hallstedt: Journal of the American Ceramic Society, 1990, vol. 73, pp. 15–23. The authors are grateful to Mr. D. Kaneko (Meijun Gijutsu Shien) for the designing and development of the induction melting furnace specified for this experiment. I.mecs Co., Ltd is thanked for their precise work on the condition adjustment of the induction heating and for their continuous support. We thank TOUHOKU SOKKI CORP for organizing the development of the experimental apparatus. We also thank Drs. S. Kawada, and A. Iwanade, Materials Analysis Station, NIMS, for chemical analysis. Dr. T. Degawa is thanked for his achievement on the field of desulfurization process and for providing his book that helped the first study of the present research. 
This research was financially supported by Japan Science and Technology Agency (JST), under the Advanced Low Carbon Technology Research and Development Program (ALCA) project: "Development of Direct and Complete Recycling Method for Superalloy Turbine Aerofoils. (JPMJAL1302)". Toshiharu Kobayashi—deceased on January 27, 2019. Waseda University, 3-4-1 Okubo, Shinjuku-ku, Tokyo, 169-8555, Japan Yuki Kishimoto , Taketo Iguchi & Shinsuke Suzuki National Institute for Materials Science (NIMS), 1-2-1 Sengen, Tsukuba, Ibaraki, 305-0047, Japan , Yuhi Mori , Makoto Osawa , Tadaharu Yokokawa , Toshiharu Kobayashi , Kyoko Kawagishi & Hiroshi Harada Institute Pprime and SAFRAN Aircraft Engines, Téléport 2, 1 Avenue Clément Ader, 86360, Chasseneuil-du-Poitou, France Satoshi Utada Search for Yuki Kishimoto in: Search for Satoshi Utada in: Search for Taketo Iguchi in: Search for Yuhi Mori in: Search for Makoto Osawa in: Search for Tadaharu Yokokawa in: Search for Toshiharu Kobayashi in: Search for Kyoko Kawagishi in: Search for Shinsuke Suzuki in: Search for Hiroshi Harada in: Correspondence to Yuki Kishimoto. Manuscript submitted April 3, 2019. Derivation of Desulfurization Rate Formula The derivation of the desulfurization rate formula [3] is explained as follows. The Fick's first law is $$ \begin{array}{*{20}c} {J = - D\frac{{\text{d}c}}{{\text{d}l}}} \\ \end{array} $$ The continuous equation can be expressed using flux J, as $$ \begin{array}{*{20}c} {\frac{{{\text{d}}w}}{{{\text{d}}t}} = {JA}} \\ \end{array} $$ Thus, $$ \begin{array}{*{20}c} {\frac{{{\text{d}}w}}{{{\text{d}}t}} = - {DA}\frac{{{\text{d}}c}}{{{\text{d}}l}}} \\ \end{array} $$ The content gradient d\( c \)/dx can be assumed by the equation: $$ \begin{array}{*{20}c} {\frac{{{\text{d}}c}}{{{\text{d}}l}} = \frac{{c_{ 1} - c_{ 2} }}{{\delta_{\text{t}} }}} \\ \end{array} $$ Using the amount of S in the melt at time t, \( w_{\text{t}} \) and the initial amount of S in the melt \( w_{ 0} \), the amount of transferred S can be written in the equation of the content gradient dc/dl, as $$ \begin{array}{*{20}c} {\frac{\text{d}}{{{\text{d}}t}}\left( {w_{\text{t}} - w_{ 0} } \right) = - {DA}\frac{{c_{ 1} - c_{ 2} }}{{\delta_{\text{t}} }}} \\ \end{array} $$ It can be assumed that the content of S \( c_{ 1} \) = constant and \( c_{ 2} \) = 0. 
The thickness of the generated layer \( \delta_{\text{t}} \) can be expressed using content of S in the melt at time t by forming the molar amount of CaO and S reacted (when x = 3) as $$ \begin{array}{*{20}c} {\frac{{\rho_{\text{CaO}} \delta_{\text{t}} A}}{{M_{\text{CaO}} }} = 2\frac{{W_{\text{Ni}} \left( {C_{\text{S,0}} - C_{\text{S,t}} } \right) \times 1 0^{ - 6} }}{{M_{\text{S}} }}} \\ \end{array} $$ and solving for \( \delta_{\text{t}} \) gives $$ \begin{array}{*{20}c} {\delta_{\text{t}} = \frac{{ 2M_{\text{CaO}} W_{\text{Ni}} }}{{\rho_{\text{CaO}} M_{\text{S}} A}}\left( {C_{\text{S,0}} - C_{\text{S,t}} } \right) \times 1 0^{ - 6} } \\ \end{array} $$ The amount of S \( w_{t} \) and \( w_{ 0} \) can be rewritten using the content of sulfur \( C_{\text{S, t}} \), as $$ \begin{array}{*{20}c} {w_{\text{t}} = W_{\text{Ni}} C_{\text{S,t}} \times 1 0^{ - 6} } \\ \end{array} $$ $$ \begin{array}{*{20}c} {w_{ 0} = W_{\text{Ni}} C_{\text{S,0}} \times 1 0^{ - 6} } \\ \end{array} $$ Furthermore, it can be assumed that the content of S \( c_{ 1} \) was equal to the finally remaining content of S in the melt, as $$ \begin{array}{*{20}c} {c_{ 1} = \rho_{\text{Ni}} C_{\text{S,fin }} \times 1 0^{ - 6} } \\ \end{array} $$ (A10) Rewriting Eq. [A5] gives $$ \begin{array}{*{20}c} {\frac{{{\text{d}}C_{\text{s,t}} }}{{{\text{d}}t}} = - \frac{ 1}{ 2}\frac{{\rho_{\text{CaO}} \rho_{\text{Ni }} M_{\text{S }} A^{ 2} C_{\text{S, fin }} D}}{{W_{\text{Ni}}^{ 2} M_{\text{CaO}} \left( {C_{\text{S,0}} - C_{\text{S,t}} } \right)}} \times 1 0^{ 6} } \\ \end{array} $$ Thus, the content of S in the melt \( C_{\text{S,t}} \) can be expressed as $$ \begin{array}{*{20}c} {C_{\text{S,t}} = C_{\text{S,0}} - \sqrt {\frac{{\rho_{\text{CaO}} \rho_{\text{Ni}} M_{\text{S }} A^{ 2} C_{\text{S, fin }} D t }}{{W_{\text{Ni}}^{ 2} M_{\text{CaO}} }} \times 1 0^{ 6} } } \\ \end{array} $$ Kishimoto, Y., Utada, S., Iguchi, T. et al. Desulfurization Model Using Solid CaO in Molten Ni-Base Superalloys Containing Al. Metall and Materi Trans B 51, 293–305 (2020). https://doi.org/10.1007/s11663-019-01716-8 Issue Date: February 2020
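To make the closed-form result in Eq. [A12] concrete, the short script below evaluates it numerically. The melt mass (500 g) and initial sulfur level (200 ppm) follow the experiments described above; the densities, interface area A, effective diffusion coefficient D, and target sulfur content are placeholder values chosen only so that the sketch runs, not the fitted quantities reported in the paper.

```python
# Numerical sketch of Eq. [A12]; parameters marked "assumed" are placeholders.
import math

M_CAO = 56.08   # g/mol, formula weight of CaO
M_S = 32.07     # g/mol, formula weight of S

def sulfur_content(t, c_s0, c_s_fin, rho_cao, rho_ni, area, d_eff, w_ni):
    """Sulfur content of the melt (ppm) after holding time t (s), Eq. [A12]."""
    term = (rho_cao * rho_ni * M_S * area**2 * c_s_fin * d_eff * t
            / (w_ni**2 * M_CAO)) * 1e6
    return c_s0 - math.sqrt(term)

def time_to_reach(c_target, c_s0, c_s_fin, rho_cao, rho_ni, area, d_eff, w_ni):
    """Holding time (s) needed to bring the melt down to c_target (ppm)."""
    return ((c_s0 - c_target)**2 * w_ni**2 * M_CAO
            / (rho_cao * rho_ni * M_S * area**2 * c_s_fin * d_eff * 1e6))

params = dict(
    c_s0=200.0,      # ppm, initial S content used in the experiments
    c_s_fin=10.0,    # ppm, assumed target S content
    rho_cao=3.3e6,   # g/m^3, assumed density of solid CaO
    rho_ni=7.5e6,    # g/m^3, assumed density of the molten alloy
    area=3.0e-3,     # m^2, assumed melt/CaO interface area
    d_eff=1.0e-9,    # m^2/s, assumed effective diffusion coefficient
    w_ni=500.0,      # g, melt mass used in the experiments
)

t_hold = time_to_reach(20.0, **params)
print(f"holding time to reach 20 ppm: {t_hold / 60:.0f} min")
print(f"S content after that hold: {sulfur_content(t_hold, **params):.1f} ppm")
```

Swapping in the experimentally fitted D and the actual rod or crucible geometry turns the same two functions into a direct evaluation of the model, for example to estimate the time needed to desulfurize a given melt mass with a CaO crucible.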
CommonCrawl
EXPONENTIAL MODELS
Exponential function questions and answers. Recent questions in Exponential models:

Ben Shaver 2022-01-06 Answered
Which set of ordered pairs could be generated by an exponential function?
A. (1, 1), (2, 1/2), (3, 1/3), (4, 1/4)
B. (1, 1), (2, 1/4), (3, 1/9), (4, 1/16)
C. (1, 1/2), (2, 1/4), (3, 1/8), (4, 1/16)
D. (1, 1/2), (2, 1/4), (3, 1/6), (4, 1/8)

zakinutuzi 2021-12-31 Answered
Furthermore, why is it that \(e^x\) is used in exponential modelling? Why aren't other exponential functions used, i.e. \(2^x\), etc.?

David Lewis 2021-12-30 Answered
I have a real-valued number \(y_t\). At each time step t, \(y_t\) is multiplied by \((1+\epsilon)\) with probability p and multiplied by \((1-\epsilon)\) with probability \(1-p\). What is the expected value of \(y_{t+n}\)? What is the variance?

Ikunupe6v 2021-12-21 Answered
What is the difference between linear growth and exponential growth?

Patricia Crane 2021-12-15 Answered
Find the derivative of \(y=e^{5x}\).

Inyalan0 2021-12-09 Answered
Solve \(e^x=0\).

Julia White 2021-12-09 Answered
Differentiate, please: \(y=x^x\).

Determine whether the given function is linear, exponential, or neither. For those that are linear functions, find a linear function that models the data. For those that are exponential, find an exponential function that models the data.
x: -1, 0, 1, 2, 3
F(x): 1/2, 1/4, 1/8, 1/16, 1/32

Lennie Carroll 2021-09-20 Answered
A researcher is trying to determine the doubling time for a population of the bacterium Giardia lamblia. He starts a culture in a nutrient solution and estimates the bacteria count every four hours. His data are shown in the table.
Time (hours): 0, 4, 8, 12, 16, 20, 24
Bacteria count (CFU/mL): 37, 47, 63, 78, 105, 130, 173
Use a graphing calculator to find an exponential curve \(f(t)=a\cdot b^t\) that models the bacteria population t hours later.

Suman Cole 2021-09-20 Answered
Determine whether the given function is linear, exponential, or neither. For those that are linear functions, find a linear function that models the data. For those that are exponential, find an exponential function that models the data.
x: -1, 0, 1, 2, 3
g(x): 2, 5, 8, 11, 14

avissidep 2021-09-18 Answered
James rents an apartment with an initial monthly rent of $1,600. He was told that the rent goes up 1.75% each year. Write an exponential function that models this situation to calculate the rent after 15 years. Round the monthly rent to the nearest dollar.

he298c 2021-09-16 Answered
A researcher is trying to determine the doubling time of a population of the bacterium Giardia lamblia. He starts a culture in a nutrient solution and estimates the bacteria count every four hours. His data are shown in the table. Use a graphing calculator to find an exponential curve \(f(t)=a\cdot b^t\) that models the bacteria population t hours later.
Time (hours): 0, 4, 8, 12, 16, 20, 24
Bacteria count (CFU/mL): 37, 47, 63, 78, 105, 130, 173

DofotheroU 2021-09-13 Answered
The exponential growth models describe the population of the indicated country, A, in millions, t years after 2006.
\(\displaystyle{A}={33.1}{e}^{{0.009}}{t}\) Uganda \(\displaystyle{A}={28.2}{e}^{{0.034}}{t}\) Use this information to determine whether each statement is true or false. If the statement is false, make the necessary change(s) to produce a true statement. Uganda's growth rate is approximately 3.8 times that of Canada's. Determine whether the given function is linear, exponential, or neither. For those that are linear functions, find a linear function that models the data. for those that are exponential, find an exponential function that models the data. $$\begin{array} \text{x} & \text{g(x)}\ \hline \text{−1−1} & \text{3}\ \text{0} & \text{6}\ \text{1} & \text{12}\ \text{2} & \text{18}\ \text{3} & \text{30}\ \end{array}$$ Mylo O'Moore 2021-08-16 Answered The following question consider the Gompertz equation, a modification for logistic growth, which is often used for modeling cancer growth, specifically the number of tumor cells. When does population increase the fastest in the threshold logistic equation \(\displaystyle{P}'{\left({t}\right)}={r}{P}{\left({1}-{\frac{{{P}}}{{{K}}}}\right)}{\left({1}-{\frac{{{T}}}{{{P}}}}\right)}?\) Khadija Wells 2021-08-13 Answered The table shows the annual service revenues R1 in billions of dollars for the cellular telephone industry for the years 2000 through 2006. \(\begin{matrix} Year&2000&2001&2002&2003&2004&2005&2006\\ R_1&52.5&65.3&76.5&87.6&102.1&113.5&125.5 \end{matrix}\) (a) Use the regression capabilities of a graphing utility to find an exponential model for the data. Let t represent the year, with t=10 corresponding to 2000. Use the graphing utility to plot the data and graph the model in the same viewing window. (b) A financial consultant believes that a model for service revenues for the years 2010 through 2015 is \(\displaystyle{R}{2}={6}+{13}+{13},{9}^{{0.14}}{t}\). What is the difference in total service revenues between the two models for the years 2010 through 2015? Phoebe 2021-08-11 Answered Find The Exponential Function \(\displaystyle{f{{\left({x}\right)}}}={a}^{{x}}\) Whose Graph Is Given. nicekikah 2021-08-08 Answered For the following exercises, use a graphing utility to create a scatter diagram of the data given in the table. Observe the shape of the scatter diagram to determine whether the data is best described by an exponential, logarithmic, or logistic model. Then use the appropriate regression feature to find an equation that models the data. When necessary, round values to five decimal places. \(\begin{array}{|c|c|c|c|c|c|c|c|c|c|c|} \hline x & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 & 11 & 12 & 13 \\ \hline \hline f(x) & 13.98 & 17.84 & 20.01 & 22.7 & 24.1 & 26.15 & 27.37 & 28.38 & 29.97 & 31.07 & 31.43 \\ \hline \end{array}\) illusiia 2021-08-06 Answered Graph each function and tell whether it represents exponential growth, exponential decay, or neither. \(\displaystyle{y}={\left({2.5}\right)}^{{{x}}}\) Line 2021-07-04 Answered Transform the given differential equation or system into an equivalent system of first-order differential equations. \(\displaystyle{x}{''}+{2}{x}'+{26}{x}={82}{\cos{{4}}}{t}\) Limits and continuity Integrals Analyzing functions
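For the Giardia lamblia doubling-time questions above, the fit \(f(t)=a\cdot b^t\) can also be obtained without a graphing calculator. Below is a minimal sketch, assuming NumPy and SciPy are available; the fitted values in the final comment are approximate.

```python
import numpy as np
from scipy.optimize import curve_fit

t = np.array([0, 4, 8, 12, 16, 20, 24], dtype=float)          # hours
n = np.array([37, 47, 63, 78, 105, 130, 173], dtype=float)    # CFU/mL

def model(t, a, b):
    # Exponential model f(t) = a * b**t
    return a * b**t

(a, b), _ = curve_fit(model, t, n, p0=(37.0, 1.05))
doubling_time = np.log(2) / np.log(b)

print(f"a = {a:.1f}, b = {b:.4f}, doubling time = {doubling_time:.1f} h")
# Roughly a ≈ 37-39 and b ≈ 1.06-1.07, i.e. a doubling time of about 11 hours.
```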
CommonCrawl
Who is our nearest planetary neighbor, on average? Assume that the planets have circular orbits centered on the sun. Assume that the radius of the orbits is 0.39, 0.723, 1, and 1.524 for Mercury, Venus, Earth, and Mars. Assume also that there are no resonances, in other words, that for a given position of planet A planet B will be in every other position over a long period of time. What is the closest planet to Earth on average? mathematics physics asked Mar 13 '19 at 4:53 Dr Xorile $\begingroup$ Though this is a somewhat mathematical puzzle, the answer is a little surprising... $\endgroup$ – Dr Xorile $\begingroup$ I assume you mean the closest planet besides Earth? $\endgroup$ – Deusovi ♦ $\begingroup$ Didn't I see someone promoting their paper on this today? $\endgroup$ – Jay $\begingroup$ @Deusovi It's not in the body of the question, but "neighbour" in the title clearly excludes the Earth. $\endgroup$ – yo' $\begingroup$ A great puzzle for pi day $\endgroup$ – Strawberry The closest planet to Earth on average is: Mercury. The other answers didn't give any calculations, so I'll provide some numbers. Hopefully they are correct! As other answers suggested, we can leave Earth stationary and just have the other planets do their orbits. Actually we only need to do half an orbit, because the other half will be exactly like the first half and not change the average in any way. By the law of cosines, we can find the distance between the Earth and another planet by looking at the triangle that is formed when you connect the Earth with the other planet and the Sun. Obviously the distance to the Sun is the radius of the orbits and the angle will be $\theta$. The distance between the planets is then $\sqrt{r^2+R^2-2*r*R*cos(\theta)}$ where $r$ is the radius of Earth's orbit, $R$ is the radius of the other planet's orbit and $\theta$ is the angle between them. Now just find the integral as $\theta$ goes from 0 to $\pi$ and divide by $\pi$. Earth to Mars: $\frac{\int_0^\pi \sqrt{1^2+1.524^2-2*1*1.524*cos(\theta)}\, d\theta}{\pi} = 1.693AU$ Earth to Venus: $\frac{\int_0^\pi \sqrt{1^2+0.723^2-2*1*0.723*cos(\theta)}\, d\theta}{\pi} = 1.136AU$ Earth to Mercury: $\frac{\int_0^\pi \sqrt{1^2+0.39^2-2*1*0.39*cos(\theta)}\, d\theta}{\pi} = 1.038AU$ Amorydai $\begingroup$ The key in this calculation is "divide by pi". Here you are assuming a uniform distribution of theta. That is, you assume the probability of the other planet being in any other theta is equal, and then you take the expected value. Depending on the other planets' theta as a function of time, this need not be true. So here we are making the simplifying assumption that all planets travel at a constant speed. $\endgroup$ – darksky $\begingroup$ @darksky Well, we are assuming the orbits are circles, so we are leaving Kepler out of it! Or, rather, we are leaving Kepler in it - "equal areas during equal intervals of time" in a circle would mean constant speed. $\endgroup$ – Amorydai $\begingroup$ @darksky Yep, constant speed and circular orbits. Both are not true, but the corrections would be much less than 7% difference between Mercury and Venus $\endgroup$ $\begingroup$ @Amorydai, how did you calculate the integrals? $\endgroup$ $\begingroup$ How very dare you use the word "Obviously"?! When did puzzling start allowing witchcraft in answers? That formula is satanic... I mean, what sort of wizard-y demon-scratch is that un-figurable? Let's have a vote on a site-wide ban on pomped up NASA geniuses... (in English: "This made me feel dumb") $\endgroup$ – Brent Hackers I must admit, I'm a bit rusty at calculus.
So here's an attempt at an answer free of calculations, but with some more visual reasoning. First, let's draw out the orbits of the planets. Because there is no resonance, let's assume Earth is still and they all rotate at the same speed. Also, for reference we'll draw a circle of distance 1 AU around Earth. Now, notice that: I've drawn a few dotted lines here. If we imagine these circles as pie charts, the left part of the charts represent the time spent more than 1 AU away from the Earth. Let's plot the distance away from the Earth over time, which looks vaguely like this: Here, you should note: The arrows along the bottom, telling you the time spent above the green line (1AU still) and the arrows in the middle, telling you that for the Venus and Mercury, (maximum distance - 1AU) is equal to (1AU - minimum distance). Also, Mars is clearly out of the question. Goodbye Mars. It's between Mercury and Venus. From calculus or intuition, we know that the average distance of the planet is proportional to the area under the curve. And now it gets a bit non-technical. The area under the curve is equal to 2π AU·rad + (bit above the curve) - (bits below the curve). But the bit above the curve is approximately the same shape as the bit below the curve (if we shift the right bit under the curve to the left side of the graph), and since they are the same height their area is probably proportional to their width. And since Venus' width of bit above the curve to width of bit below the curve ratio is bigger than that of Mercury, and the fact that those bits are taller than Mercury's bits, I estimate Venus' total area is probably more than that of Mercury's. So my guess is that: Mercury is on average closest to the Earth. (I'd love to know how accurate this argument is, but that maths is beyond me.) NB: Click on images for slightly higher quality if they're a bit fuzzy. boboquackboboquack $\begingroup$ Heck, I so love this one. If you wanna teach mathematical/physical intuition, this is the way! $\endgroup$ $\begingroup$ Love this answer. Super helpful for gaining the intuition. $\endgroup$ $\begingroup$ One of my favorite answers in this whole site. I would actually recommend reading this before reading the accepted answer so it's easier to digest the technical math. $\endgroup$ – yushi Reasoning: The average position of Earth (and indeed all planets in a circular orbit) is the middle of the Sun. Since Mercury's orbit is closest to the sun, it's the nearest on average to the Earth, and indeed all the other planets. Matthew BarberMatthew Barber $\begingroup$ Following that reasonning, wouldn't the answer be "All planets are equally close to the Earth on average, as all of their average positions fall at the same place (middle of the Sun)" ? $\endgroup$ – Soltius $\begingroup$ By this logic, the average closest planet to our moon is also Mercury. $\endgroup$ – BlueRaja - Danny Pflughoeft $\begingroup$ @BlueRaja That's only true if you assume that the orbits of the Moon and the Earth around the sun are independent, which they obviously aren't. $\endgroup$ – Matthew Barber $\begingroup$ @Soltius No, that does not follow, as you cannot calculate average distances entirely from average positions. Averaging the position of the Earth won't help you calculate the distance either. It's just something that illustrates why the distance from the sun of the other planet is the only thing that matters in the ordering. $\endgroup$ Others have done all the necessary calculations, so here's some hairy maths. 
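Before the more formal argument below, here is a quick numerical check of the three orbit-averaged distances used in the accepted answer above; a minimal sketch, assuming SciPy is available:

```python
import numpy as np
from scipy.integrate import quad

def mean_distance(R, r=1.0):
    """Orbit-averaged Earth-planet distance for circular orbits of radii r and R."""
    integrand = lambda theta: np.sqrt(r**2 + R**2 - 2.0 * r * R * np.cos(theta))
    value, _ = quad(integrand, 0.0, np.pi)
    return value / np.pi

for name, R in [("Mercury", 0.39), ("Venus", 0.723), ("Mars", 1.524)]:
    print(f"{name}: {mean_distance(R):.3f} AU")
# Mercury ~ 1.04 AU, Venus ~ 1.14 AU, Mars ~ 1.69 AU, so Mercury is closest on average.
```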
I assume as per the question that all orbits are circular and that the planets move in such a way that the average distance equals the average over all angular differences. Then it turns out that the average distance from earth to a planet whose orbit has radius $r$ astronomical units (i.e., $r$ times the radius of the earth's orbit) is $\frac2\pi(1+r)E(\frac{4r}{1+r^2})$ astronomical units, where E is the so-called complete elliptic integral of the second kind, what Mathematica calls EllipticE. So what we'd like to be true is that this is an increasing function of $r$. This does appear to be true, but proving it is not so trivial. Rather than looking at the average over the whole orbit, let's look at just two antipodal points. So, suppose the angle between earth's position and the other planet's position is $\theta$, so that the distance is $\sqrt{(r-\cos\theta)^2+\sin^2\theta}$; half-way around the orbit the other planet's position is $\theta+\pi$ and the distance is $\sqrt{(r+\cos\theta)^2+\sin^2\theta}$. The sum of these is $f(r,\theta):=\sqrt{(r-\cos\theta)^2+\sin^2\theta}+\sqrt{(r+\cos\theta)^2+\sin^2\theta}$, our average is the average of this over all values of $\theta$, and it will be an increasing function of $r$ if $f$ is for every $\theta$. This will be true if it's true when we consider instead $g(r,u,v):=\sqrt{(r-u)^2+v^2}+\sqrt{(r+u)^2+v^2}$ and allow $u,v$ to take any value at all. (Which just corresponds to letting the earth's distance from the sun be something other than 1 unit.) The derivative of this thing is $\frac{r+u}{\sqrt{(r+u)^2+v^2}}+\frac{r-u}{\sqrt{(r-u)^2+v^2}}$. Obviously this is positive when $r>u$. When $r<u$ it's $h(u+r,v)-h(u-r,v)$ where $h(p,q)=\frac{p}{\sqrt{p^2+q^2}}=\cos\tan^{-1}\frac qp$. But this is obviously a decreasing function of $q/p$, hence an increasing function of $p$, which means that $h(u+r,v)>h(u-r,v)$, which means that $\frac{\partial g}{\partial r}>0$, which means that $\frac{\partial f}{\partial r}>0$, which means that $\frac{\partial\int f}{\partial r}>0$, which means that indeed the average distance is an increasing function of $r$. I suspect there may be an easier more purely geometrical way to do this. Gareth McCaughan♦Gareth McCaughan $\begingroup$ @boboquack gets at a nice intuition for why this is true. I think also looking at the pairs of points: in line with the earth they balance out (so 0 difference) and at right angles the further out the orbit the further out the distance. So a continuity argument says that the overall average is monotonic. $\endgroup$ Fix the position of the Earth, and let the planets move in orbit. We want the average distance. If the Sun was an object to consider, the radius R would be the average. If there was another planet on Earth's orbit, it's average would be greater than R, as most of the orbit is at a further distance than R from the Earth (draw a circle radius R from the Earth - it cuts the other orbit before the halfway points). As this is a continuous and monotonic increasing function, the planet closest on average is Mercury. JMPJMP $\begingroup$ Why is it continuous and monotonic? And how did you deal with Mars? $\endgroup$ – boboquack $\begingroup$ @boboquack; the orbit moves continuously and so therefore does the average function, which is a quadratic, and therefore has a monotonic differential. Larger orbits just get bigger (monotonic increasing remember!). Also see en.wikipedia.org/wiki/Orbit. 
$\endgroup$ – JMP $\begingroup$ Average distance to earth is not a quadratic function of orbit radius because for very large orbits it's approximately equal to the radius. $\endgroup$ – Gareth McCaughan ♦ $\begingroup$ @GarethMcCaughan; this doesn't change my argument much though. the average function depends on R and only approximates R locally. $\endgroup$ $\begingroup$ The function in question is monotone increasing, though. At least, I think it is, though I haven't tried to prove it; it's a pretty ugly function involving elliptic integrals. $\endgroup$ Here is another way to deduce the same result without any calculations. First the inner planets. Imagine that instead of the Earth, there is a wall at 1 AU from the sun. The average distance of any inner planet's orbit to the wall is easily seen to be exactly 1 AU. This is because you can pair up points on the left and right sides of the orbit. The average distance of those two points to the wall is the same as the distance of their midpoint to the wall. If you now go back to measuring the distance to the Earth itself, the distances get larger. Crucially, the larger the orbit, the higher the slopes of the line segments we are measuring, and the further away the average distance is from 1 AU. This shows that amongst the inner planets, the innermost planet (Mercury) has the smallest average distance to Earth. What about the outer planets? Let's go back to the wall replacing the Earth. If an planet's orbit crosses the wall, then when it comes to measuring its distance, we might as well mirror the planet's position in the wall and measure the distance to its mirror image. When we then do the same trick of pairing points in the two halves of the orbit, it is clear that the average distance to the wall becomes greater than 1 AU to start with. Combine that with the fact that when measuring the distance to Earth itself the slopes of the line segments are even larger than before, it is clear that the average distance to the planet is even greater compared to the inner planets. Jaap ScherphuisJaap Scherphuis I'd say the closest planet to Earth is Earth with an average distance of 0 Ahmed Ashour $\begingroup$ Well, unfortunately for you, a neighbour is a well defined notion in geometry, and excludes you yourself... $\endgroup$ $\begingroup$ @yo' I disagree. In graph theory this might be true, but if you're talking about metric spaces (which we are), then an epsilon neighborhood around a point always contains that point. $\endgroup$ – Santana Afton $\begingroup$ Well, in clustering theory, statistics etc. it's all pretty clear, and that's the context in which I see this puzzle. $\endgroup$ $\begingroup$ @SantanaAfton: a neighbourhood of a point in a topological/metric space contains the point itself, but you don't speak of points being neighbours of each other in that setting. (That said, I think everyday usage is more to the point here than any of the technical mathematical senses.) $\endgroup$ – Peter LeFanu Lumsdaine Forgive me for being late to the party, but I must nitpick all the other answers. Original question: "What is the closest planet to Earth on average?" All of you translated that into: "Which planet has the smallest average distance from Earth?" I think a better interpretation is: "Which planet is closest to Earth for the highest percentage of time?" That may not sound like a huge difference but from a statistical standpoint that sort of thing has consequences. 
It's like how a set of data can have a wildly different average, median, and mode. There are no bonus points for being in 2nd or 3rd place in my interpretation, and that has the potential to throw the answer. Solution below: The answer turns out to be MARS! Nah, I'm messing with you. It's still Mercury. I have no idea how to solve this problem without computer aid. Wasn't too hard. I had to assume orbital velocity for every planet, but thanks to Sir Isaac Newton it wasn't a guess. Mercury - 46.51% Venus - 36.72% Mars - 16.77% All other planets - 0% For the record, I'm not taking this very seriously. I do honestly think my interpretation is a little more accurate but it's not like I'd down-vote you guys or anything. Fun puzzle. Respect. Dark Thunder $\begingroup$ Love it. Nitpicking is the heart and soul of much of puzzling and gaming! +1 $\endgroup$
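Dark Thunder's closest-for-the-largest-fraction-of-time reading is easy to check with a short simulation. A minimal sketch, assuming NumPy and treating the phase angles as independent and uniformly distributed (which is what the question's no-resonance assumption amounts to):

```python
import numpy as np

rng = np.random.default_rng(0)
radii = {"Mercury": 0.39, "Venus": 0.723, "Mars": 1.524}
n = 1_000_000

# Independent, uniformly distributed phase angles relative to Earth
dists = {name: np.sqrt(1 + R**2 - 2 * R * np.cos(rng.uniform(0, 2 * np.pi, n)))
         for name, R in radii.items()}

stacked = np.vstack([dists[k] for k in radii])
closest = np.argmin(stacked, axis=0)

for i, name in enumerate(radii):
    print(f"{name}: mean {dists[name].mean():.3f} AU, "
          f"closest {100 * np.mean(closest == i):.1f}% of the time")
# Expect roughly: Mercury ~46%, Venus ~36%, Mars ~17-18%, in line with the answer above.
```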
CommonCrawl
Modelling the return on investment of preventively vaccinating healthcare workers against pertussis Luqman Tariq1, 2, Marie-Josée J Mangen3, Anke Hövels1, Gerard Frijstein4 and Hero de Boer4Email author BMC Infectious Diseases201515:75 © Tariq et al.; licensee BioMed Central. 2015 Received: 18 July 2014 Healthcare workers (HCWs) are at particular risk of acquiring pertussis and transmitting the infection to high-risk susceptible patients and colleagues. In this paper, the return on investment (ROI) of preventively vaccinating HCWs against pertussis to prevent nosocomial pertussis outbreaks is estimated using a hospital ward perspective, presuming an outbreak occurs once in 10 years. Data on the pertussis outbreak on the neonatology ward in 2004 in the Academic Medical Center Amsterdam (The Netherlands) was used to calculate control costs and other outbreak related costs. The study population was: neonatology ward staff members (n = 133), parents (n = 40), neonates (n = 20), and newborns transferred to other hospitals (n = 23). ROI is presented as the amount of Euros saved in averting outbreaks by investing one Euro in preventively vaccinating HCWs. Sensitivity analysis was performed to study the robustness of the ROI. Results are presented at 2012 price level. Total nosocomial pertussis outbreak costs were €48,682. Direct control costs (i.e. antibiotic therapy, laboratory investigation and outbreak management control) were €11,464. Other outbreak related costs (i.e. sick leave of HCWs; restrictions on the neonatology ward, savings due to reduced working force required) accounted for €37,218. Vaccination costs were estimated at €12,208. The ROI of preventively vaccinating HCWs against pertussis was 1:4, meaning 4 Euros could be saved by every Euro invested in vaccinating HCWs to avert outbreaks. ROI was sensitive to a lower vaccine price, considering direct control costs only, average length of stay of neonates on the neonatology ward, length of patient uptake restrictions, assuming no reduced work force due to ward closer and presuming more than one outbreak to occur in 10 years' time. From a hospital ward perspective, preventive vaccination of HCWs against pertussis to prevent nosocomial pertussis outbreaks results in a positive ROI, presuming an outbreak occurs once in 10 years. Nosocomial outbreak Pertussis among healthcare workers (HCWs) is of special concern because of the potential for nosocomial exposure to susceptible patients and other HCWs [1]. HCWs are at particular risk of acquiring pertussis and may transmit the infection to young infants and colleagues [2]. Compared to the general adult population, HCWs are reported to have an almost 1.7-times higher risk of pertussis [3]. In literature, reports of nosocomial pertussis outbreaks following community or hospital exposures of HCWs are available [4-7]. Nosocomial outbreaks not only generate a considerable disease burden in humans, but can also result in substantial control costs and other outbreak related costs for hospitals. The type of expenses include diagnostic testing, provision of antibiotic treatment or prophylaxis, costs associated with furlough of employees, and time spent by occupational health infection control staff to track and identify exposed individuals, as well as costs associated with dissemination of information [2]. 
Previous studies estimating the nosocomial pertussis outbreak costs among HCWs concluded that these outbreaks resulted in serious adverse health and economic consequences for the hospitals, HCWs, patients and their families [1,5,8,9]. Using a hospital perspective, Ward et al. [5] estimated the total outbreak costs among HCWs at €55,579¹ for 91 cases in a French hospital. Calugar et al. [1] calculated the total outbreak costs at €76,945¹,² for 17 cases in a hospital in the United States. From the hospitals' perspective, Baggett et al. [8] calculated the costs of two hospital outbreaks in the United States at €114,526 and €248,998¹,², respectively. Zivna et al. [9] estimated the total outbreak costs to be €80,428 - €93,088¹,² in a tertiary care medical center in the United States. According to Calugar et al. [1], cost savings and benefits can be accrued by vaccinating HCWs against pertussis, with benefits for the hospital estimated at 2.38 times each dollar invested in vaccinating HCWs (USD 2004 estimate). Therefore, prevention of nosocomial pertussis outbreaks by preventively vaccinating HCWs can be beneficial and has the potential to reduce the overall disease and economic burden of pertussis. In the Netherlands, pertussis vaccination was introduced in 1952 with long-established high vaccination coverage of 96-97% [10]. Dutch infants are vaccinated against pertussis at the age of 2, 3, 4 and 11 months, and 4 years in the National Immunization Program (NIP) [11], meaning that in the first four months infants are not fully protected against pertussis and disease occurs frequently, especially in years of high pertussis circulation (i.e. every 2–4 years) [10]. A national vaccination recommendation for HCWs has yet to be made in the Netherlands. Nosocomial pertussis outbreaks have occurred in the Netherlands in the past decade [12,13]. However, the economic consequences of such pertussis outbreaks and the potential benefits of preventively vaccinating HCWs have not been evaluated. In this paper, we aim to calculate the return on investment (ROI) of preventively vaccinating HCWs against pertussis to prevent nosocomial pertussis outbreaks in a neonatology ward using a hospital ward perspective. Data on the nosocomial pertussis outbreak on the neonatology ward in the Academic Medical Center Amsterdam (AMC) in The Netherlands in the year 2004 (for details see Box A.1 in Additional file 1) were used as a case study to examine the economic impact of a pertussis outbreak in a neonatology ward. Data collection & study population During the outbreak period, data were collected by the occupational health service department of the AMC (hereafter referred to as the "AMC database") on all control measures undertaken related to newborns, their parents, staff members, and to the organization within the hospital. The study population consisted of: neonatology ward staff members (15 neonatologists, 100 nurses, 18 assistants), parents of newborns (20 fathers, 13 lactating mothers, 7 non-lactating mothers), neonates (20 infants) and parents of 23 newborns who were transferred to another hospital. As data on the pertussis outbreak were used in an aggregated way without identifying the individual participant, no written informed consent and ethical approval were required from participants to perform the data analysis. The following assumptions were made in this study as the AMC database did not capture all data on the outbreak: Due to patient uptake restrictions (i.e.
for a period of 10 days, no new patients were allowed to be admitted on the neonatology ward), we assumed that the following activities were performed during regular working hours (i.e. not resulting in additional costs for the hospital ward): telephone calls made to parents whose children were transferred to another hospital; time spent by the neonatologist working on controlling the outbreak; survey performed by the occupational health service department as this is normal procedure during an outbreak in the AMC; all drug and vaccination administrations; PCRs done on nasopharyngeal swabs and blood samples taken for serology; Based on average Dutch working population [14] we assumed that an average working week of staff members other than neonatologists consisted of 32 hours; neonatologists were assumed to work 42 hours/week [15]; No further transmission of the pertussis infection took place after restrictions on the patient uptake on the neonatology ward were lifted; Reduced work force was required to run the neonatology ward during the period of restrictions on the patient uptake. We assumed that on day 1, 2, 3 and 4 0%, 5%, 10% and 15% reduced work force was required, respectively. On day five and onwards, this assumption was set at 20%. Cost estimations Total outbreak costs were calculated by considering: direct control costs (i.e. (i) medical consumption costs containing antibiotics, (ii) laboratory investigation costs, (iii) outbreak control management costs) and other outbreak related costs (i.e. (iv) replacing costs for sick hospital staff members, (v) losses due to restrictions on patient uptake on the neonatology ward and (vi) savings due to a reduced work force required on the neonatology ward during patient uptake restriction period). Dutch prices were used to derive medication costs and other resource unit costs [15-18], and where necessary updated to 2012 using Dutch consumer price index (CPI) [14]. Vaccination costs were calculated by considering: catch-up vaccination: vaccination of all HCWs (n = 133) one year after the outbreak based on the list price for Infanrix® IPV (diphtheria, tetanus, acellular pertussis and inactivated poliomyelitis) vaccine (€34.50 per vial) [19,20]; vaccination of newly employed HCWs staff (assumption 10% per year) for a period of 10 years. New HCWs were assumed to be unvaccinated but would be vaccinated upfront when hired at 100% coverage rate; booster vaccination to be provided eight years after first vaccination due to the declining vaccine effectiveness [21]. ROI of preventively vaccinating HCWs was calculated by dividing the return on investment (i.e. averted outbreak costs, using the AMC outbreak costs as proxy) by the cost of the investment (i.e. cumulative vaccination costs including booster vaccination): $$ \mathrm{Return}\ \mathrm{On}\ \mathrm{Investment} = \left\{\frac{\mathrm{Averted}\ \mathrm{Outbreak}\ \mathrm{Costs}}{\mathrm{Vaccination}\ \mathrm{Costs}}\right\} $$ and it is presented as a ratio: the amount of Euros saved by averting an outbreak times one Euro invested in vaccinating HCWs. All costs are presented in Euro at 2012 price level and without time-discounting. Discounting is applied in sensitivity analysis. The analysis was conducted in MS Excel, version 2007 based on the study population and the input parameters displayed in Table 1. 
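As a minimal illustration of the ROI formula above, the following sketch reproduces the headline ratio from the cost totals reported later in the Results section. The two-outbreak line is only a rough check against the sensitivity analysis and ignores the year-by-year detail of the underlying MS Excel model.

```python
# Headline return-on-investment calculation from the Methods section,
# using the cost totals reported in the Results (2012 Euros).
averted_outbreak_costs = 48_682   # total nosocomial outbreak costs (one outbreak)
vaccination_costs      = 12_208   # catch-up + new-staff + booster vaccination

roi = averted_outbreak_costs / vaccination_costs
print(f"ROI, one outbreak in 10 years:  1:{roi:.1f}")      # ~1:4.0

# Sensitivity scenario: two outbreaks averted in the same 10-year window
roi_two = 2 * averted_outbreak_costs / vaccination_costs
print(f"ROI, two outbreaks in 10 years: 1:{roi_two:.1f}")  # ~1:8.0 (paper reports 1:7.9)
```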
The outcome measures were: Study population and input parameters (all costs are expressed in 2012 Euros) Study population (n) Lactating mothers Non-lactating mothers Parents per child Average weight newborn (in kg) Medical consumption Erythromycin cost per vial (solution of 20 mg) Erythromycin cost per tablet Azithromycin cost per tablet Laboratory investigation Number of PCRs performed: PCR costs per unit Number of serological tests performed: Serological test cost per unit Outbreak control management Crisis meetings in the hospital Duration of a crisis meeting (in minutes) Personnel present at every crisis meeting: Neonatologists Amount of surgical masks used during the outbreak period Costs per unit surgical mask Replacing sick staff members Average working hours of nurses per week Number of staff members not able to work for three days after performing the PCR test. Assumed they were all nurses Number of hours of nurses absence due to the PCR test Number of staff members absent from the neonatology ward for one week. Assumed they were all nurses. Number of hours of nurses absence due to illness (i.e. sick leave) Restrictions patient uptake Regular occupation of the neonatology ward, patients per day Average length of stay of neonates in neonatology ward (in days) * & Personal communication Average number of patients admitted on the neonatology ward per day Length of patient restriction uptake on the neonatology ward (in days) Number of empty bed-days due to ward closure during the restriction period Cost per patient per day due to patient restriction Reduced workforce due to patient uptake restrictions Average number of nurses & assistant working/day in the neonatology ward Average number of consultant working/day in the neonatology ward Average number of neonatologists working/day in the neonatology ward Reduced working force due to ward closure: On day 1 On day 5 and onwards Reduced working hours due to ward closure Preventive vaccination Infanrix IPV® costs Staff members vaccinated Average number of new personal in neonatology ward /year (in %) Average number of new personnel in neonatology ward /year (absolute) Booster vaccination after years Tariff personnel The costs for the employer are higher than the tariffs paid to the employees, we therefore multiplied the costs per hour by Tariff per hour/nurses Tariff per hour/neonatologists Tariff per hour/others *During the outbreak period, these data were collected by the occupational health service department of the AMC. In this paper we named this information the AMC database. total nosocomial pertussis outbreak costs, split up as costs of antibiotics, costs of laboratory investigations, costs of outbreak control management, costs due to work absence of sick staff members, losses due to restrictions on patient uptake and reduced costs (=savings) due to reduced working force; ROI of preventively vaccinating HCW assuming one outbreak within 10 years' time. Univariate and two-way sensitivity analysis was performed on several input parameters to further test the robustness of the outcomes. In Table 2, the scenarios for the sensitivity analysis together with the values of input parameters are displayed. Amongst other variables, the impact of discounting future outbreak costs and vaccination costs on the ROI was estimated, using a discount rate of 4%, according to Dutch health economic guidelines [15]. Also, the number of pertussis outbreaks in a period was varied (i.e. once or twice in 10 years, once in 20 years). 
Scenarios in the univariate and two-way sensitivity analysis Range for sensitivity analysis Base case Average working days for nurses per week −1 day and + 1 day Average length of stay of neonates in neonatology ward 0.5 and 1.5 × base case Number of staff members not able to work for 3 days after performing the PCR test Average number of new personnel in neonatology ward /year (in %) No reduced working hours for nurses, neonatologists and other HCW due to ward closure Vaccine price Costs considered in the ROI - only direct control costs Undiscounted outbreak and vaccination costs with 2 outbreaks in 10 years Discounted outbreak and vaccination costs with 1 outbreak in 10 years Discounted outbreak and vaccination costs with 2 outbreaks in 10 years Undiscounted outbreak and vaccination costs with 1 outbreak in 20 years Smaller neonatology ward (HCW × 0,50 and ward occupation ×0,50) Bigger neonatology ward (HCW × 1,50 and ward occupation ×1,50) Length of patient restriction uptake on the neonatology ward (5 days) and average length of stay of neonates in neonatology ward (14 days) Total nosocomial pertussis outbreak costs in the AMC in the Netherlands were €48,682. Direct control cost account for less than 25%. The majority of the costs were caused due to patient uptake restrictions on the neonatology ward, including savings due to reduce working force (33%), and due to absenteeism of HCWs (43%). Medical consumption costs were €785, laboratory investigation costs accounted for €6,982, outbreak management control costs were €3,697, costs due to absenteeism were €21,008, and costs due to patient uptake restrictions were €16,210). Cumulative vaccination costs, including boostering, were €12,208. The return on investment of vaccinating HCWs was 1:4, meaning 4 Euros can be saved by investing one Euro in vaccinating HCWs to prevent a nosocomial pertussis outbreak (Table 3). Return on investment of preventively vaccinating healthcare workers against pertussis Costs in Euros Direct control costs Antibiotic therapy (a) Laboratory investigations (b) Outbreak control management (c) Total (a,b,c) Other outbreak related costs Absenteeism costs (d) Restrictions on patient uptake on the ward (e) Savings due to reduced staff costs (f) 1 −/− €30,826 Total (d,e,f) Total nosocomial pertussis outbreak costs (g) Vaccination costs (h) Return on investment ((g-h)/h) 1 Reduced work force was required due to ward closure, which led to savings in personnel costs. 2 (€47,036 + €-30,826)/€48,682 = 33%. Vaccine price, inclusion of direct control costs only, average length of stay of neonates on the neonatology ward, length of patient uptake restrictions, assuming no reduced work force due to ward closer, and presuming two outbreaks would occur in 10 years time had an impact on the ROI, see Figure 1 (for detailed information see Table A.1 in Additional file 2). The ROI increased to 1:6.6 when vaccine price was decreased to €18.30 per dose. When only direct control costs were considered in the ratio, ROI was slightly negative (1:-0.9). The ROI was 1:7.8 when average length of stay of neonates in the ward was assumed to be shorter (i.e. 7 days versus 14 days), and would be 1:2.7 if average length of stay of neonates would be 21 days. A shorter and a prolonged length of patient uptake restrictions resulted in a lower (1:2.9) and a higher (1:6.9) ROI, respectively. Assuming no reduction in the work force on the neonatology ward resulted in a ROI of 1:6.5. 
Presuming an outbreak would occur twice in 10 years, the ROI would be 1:7.9, if undiscounted and 1:7.4 if discounted. All other factors, including discounting, changed only slightly the calculated ROI. Tornado diagram with outcomes of the univariate and two-way sensitivity analysis. The return on investment of preventively vaccinating HCWs against pertussis to prevent a nosocomial pertussis outbreak was 1:4, meaning 4 Euros can be saved by investing one Euro on preventive vaccination of HCWs to prevent a pertussis outbreak. Total nosocomial pertussis outbreak costs in the AMC were €48,682. Direct control costs and other outbreak related costs were 24% and 76% of total costs, respectively. The majority of the costs were caused due to patient uptake restrictions on the neonatology ward of the hospital and by absence of the infected HCWs. Our findings on total outbreak costs were in accordance with Ward et al., [5] (€55,579) and Calugar et al., [1] (€76,945). However, costs reported by Baggett et al., [8] (€114,526 and €248,998) were much higher compared to our study, which was primarily the result of higher personnel costs used in Baggett et al., [8]. Our estimate of the ROI was slightly higher than calculated by Calugar et al., [1] but still in the same order of magnitude. Limitations & assumptions A major limitation of this study is the possibility of recall bias because the data on the pertussis outbreak were recalled from the year 2004. Another limitation is the narrow perspective (i.e. hospital ward) used in this study. However, using a broader perspective and including additional costs would have led to even a more favourable (i.e. higher) ROI. The assumptions made in this study led to outbreak cost estimates which can be considered as conservative. First, handling costs of several activities (e.g. drug and vaccination administration, PCRs, blood samples, and survey) were not considered as it was assumed that these activities were performed by the staff themselves during their regular working hours. Including the costs of these activities would lead to higher total outbreak costs and a higher ROI. Second, it was assumed that no further transmission of the infection took place when patient uptake restrictions on the neonatology ward were lifted. In practice, additional infections could occur after these restrictions would be lifted, which would lead to additional outbreak costs and a higher ROI. Third, additional costs related to the spread of the infection by children who were brought to other hospitals were beyond our perspective (i.e. the hospital ward) and therefore not considered. But also productivity losses due to work absence of sick parents (i.e. only fathers as mothers would be on maternity leave) were disregarded because of the restricted perspective. Both - negative externalities to the Dutch society - might be omitted if the HCWs would have been vaccinated. Fifth, it was assumed that vaccine uptake, both in existing and newly joined HCWs, would be 100%, which is slightly higher than the observed vaccine coverage in general population (i.e. 96-97%) [10]. Also did we assume that a 8-year booster vaccination would be sufficient to guarantee a 100% vaccine effectiveness, which might have been an oversimplification. Unvaccinated and or unprotected individual HCWs, however, remain a risk of infection, and as such a risk for a potential pertussis outbreak. Sixth, psychological impact on parents with newborns due to a prolonged stay and treatment in the hospital was not quantified. 
Seventh, the prevented outbreak costs were based on one single outbreak. A larger or a smaller outbreak in a slightly other setting might lead to higher or smaller ROI than presented in the current study. Finally, the ROI estimated in this study is based on preventing one nosocomial pertussis outbreak. Actually, the impact of immunization of HCWs may be much larger as pertussis infection occurring in infants might go unrecognized unless extensive lab diagnosis is applied. Every 2–4 years, an extra epidemic is observed with high number of pertussis cases in adolescents and adults in The Netherlands [10]. However, the number of detected nosocomial outbreaks affecting infants does not occur at the same rate which suggests that pertussis infection occurring in infants might go unrecognized. As a consequence, more nosocomial outbreaks could possibly be prevented by preventive vaccination of HCWs, which would lead to a higher ROI. Therefore, the results presented in this study can be considered as conservative. Policy implications In the Netherlands, a national vaccination recommendation of HCWs against pertussis has yet to be made. In the Dutch society, infants are not fully protected against pertussis in the first few months of their life. To provide protection to this vulnerable group, preventive vaccination of HCWs working with vulnerable infants who are not fully protected could be a relevant intervention. Considering the fact that about 76% of the outbreak costs estimated in this study were caused due to patient uptake restrictions on the neonatology ward of the hospital and by absence of the infected HCWs, it shows the importance of preventing nosocomial pertussis outbreaks and their disregarded impact. Also, it could be argued that hospitals as employers should have some responsibility in preventing nosocomial infections and protecting both, patients and staff members. Therefore, within policy decision making on vaccination recommendations, vaccinating HCWs should also be recommended. In conclusion, the current study demonstrated that from a hospital ward perspective, preventive vaccination of healthcare workers against pertussis to prevent nosocomial pertussis outbreaks does result in a positive return on investment (1:4). Therefore, preventive vaccination of healthcare workers can be considered a wise use of healthcare resources enabling the prevention of nosocomial pertussis outbreaks with the tendency to reduce, both the economic and disease burden of pertussis in both, hospital setting and the society. Ethics approval and consent As data from the nosocomial pertussis outbreak in the AMC in The Netherlands was used in an aggregated way to model the return on investment of vaccinating healthcare workers without identifying the individual participant, no written informed consent and ethical approval were required from participants to perform the data analysis. Standards of reporting As this study is not a full economic evaluation but a financial analysis based on a mathematical model, the Consolidated Health Economic Evaluation Reporting Standards checklist (CHEERS) was not necessarily suitable to be used in this case. However, while preparing this manuscript the attempt has been to follow where applicable the CHEERS guideline and to meet the reporting standards of a scientific publication. All relevant raw data used in this study are presented in the current manuscript (i.e. 
Tables 1, 2 and 3, Figure 1, Box A.1 in Additional file 1 and Table A.1 in Additional file 2) and will be freely available to any scientist wishing to use them for non-commercial purposes, without breaching participant confidentiality. Previous publication data Data from this manuscript has been presented as a poster presentation at the International Society for Pharmaco-economoc and Outcomes Research in Amsterdam in 2014. The abstract of this poster presentation was published in Value of Health Vol. 17, Issue 7, Page A672. 1Cost estimates are shown for 2012 price level using local Consumer Price Indexes: http://stats.oecd.org/Index.aspx?DataSetCode=MEI_PRICES#. 2Cost estimates are shown for 2012 price level using exchange rate from USD 2012 to EURO 2012: 0,778 (http://stats.oecd.org/index.aspx?queryid=169). Academic Medical Center Amsterdam CPI: HCWs: IPV: Polymerase chain reaction test ROI: United States of America Dollar The authors gratefully thank Julie Roiz and Shariar Mortazavi for their assistance on the earlier version of the model. Additional file 1: Box A.1. Case study (Nosocomial pertussis outbreak on the neonatology ward in the AMC in the Netherlands in 2004). Additional file 2: Table A.1. Applied one-way and two-way sensitivity analyses: details and results. Costs are expressed in 2012 Euros. This study was not financially supported by any source, but was conducted as a collaboration between the Academic Medical Center Amsterdam (AMC), the Julius Center for Health Sciences and Primary care of the University Medical Center Utrecht (UMCU), Utrecht University (UU), and GlaxoSmithKline (GSK) in The Netherlands. The data on the nosocomial pertussis outbreak was collected in the AMC. Data analysis was performed in collaboration between the AMC, UMCU, UU and GSK. LT is employed and paid by GSK. Parallel to this employment is LT a professional PhD-researcher at the Utrecht Institute for Pharmaceutical Sciences at UU. UMCU received a consult fee from GSK to enumerate the work done by MJM, i.e. checking the cost-analysis performed in this study and helping with the manuscript. MJM is employed and paid by UMCU. The authors declare that they have no further competing interests. LT, GF, and HdB conceptualized the design of the study. LT, MJM, and HdB performed the data acquisition. LT, MJM, and HdB developed the model and performed data analysis and interpretation. All authors contributed in drafting and revising the manuscript. All authors approved the final version of the manuscript. Division of Pharmacoepidemiology & Clinical Pharmacology, Utrecht Institute for Pharmaceutical Sciences, Utrecht University, Utrecht, The Netherlands GlaxoSmithKline BV, Zeist, The Netherlands Julius Center for Health Sciences and Primary Care, University Medical Center Utrecht, Utrecht, The Netherlands Academic Medical Centre Amsterdam, Meibergdreef 9, 1105 AZ Amsterdam, The Netherlands Calugar A, Ortega-Sanchez IR, Tiwari T, Oakes L, Jahre JA, Murphy TV. Nosocomial pertussis: costs of an outbreak and benefits of vaccinating healthcare workers. Clin Infect Dis. 2006;42:981–8.View ArticlePubMedGoogle Scholar Sandora T, Gidengil CA, Lee GM. Pertussis vaccination for healthcare workers. Clin Microbiol Rev. 2008;21(3):426–34.View ArticlePubMedPubMed CentralGoogle Scholar de Serres G, Shadmani R, Duval B, Boulianne N, Déry P, Douville Fradet M, et al. Morbidity of pertussis in adolescents and adults. J Infect Dis. 2000;182:174–9.View ArticlePubMedGoogle Scholar Kurt TL, Yeager AS, Guenette S, Dunlop S. 
Spread of pertussis by hospital staff. JAMA. 1972;221:264–7.
Ward A, Caro J, Bassinet L, Housset B, O'Brien JA, Guiso N. Health and economic consequences of an outbreak of pertussis among healthcare workers in a hospital in France. Infect Control Hosp Epidemiol. 2005;26:288–92.
Weber DJ, Rutala WA. Pertussis: an underappreciated risk for nosocomial outbreaks. Infect Control Hosp Epidemiol. 1998;19:825–8.
Wright SW, Decker MD, Edwards KM. Incidence of pertussis infection in healthcare workers. Infect Control Hosp Epidemiol. 1999;20:120–3.
Baggett HC, Duchin JS, Shelton W, Zerr DM, Heath J, Ortega-Sanchez IR, et al. Two nosocomial pertussis outbreaks and their associated costs – King County, Washington, 2004. Infect Control Hosp Epidemiol. 2007;28(5):537–43.
Zivna I, Bergin D, Casavant J, Kelley A, Mathis S, Ellison 3rd RT. Impact of Bordetella pertussis exposures on a Massachusetts tertiary care medical system. Infect Control Hosp Epidemiol. 2007;28(6):708–12.
Van der Maas N, de Melker H, Heuvelman K, van Gent M, Mooi FR. Kinkhoestsurveillance in 2013 en 2014 [Pertussis surveillance in 2013 and 2014]. RIVM Briefreport 2014–0165. Bilthoven, The Netherlands.
de Greeff SC, Mooi FR, Schellekens JFP, de Melker HE. Impact of acellular pertussis preschool booster vaccination on disease burden of pertussis in The Netherlands. Pediatr Infect Dis J. 2008;27:218–23.
Zwart B, Van Veenendaal M, Vandenbroucke-Grauls C, Kok J, Visser C. Kinkhoestuitbraak op een neonatologieafdeling [Pertussis outbreak on a neonatology ward]. Infectieziekten Bulletin. 2007;18(3):90–1.
Niessen WJM. Het voorkomen van verspreiding van kinkhoest op een kinderafdeling van een ziekenhuis [Preventing the spreading of pertussis on a neonatology ward of a hospital]. Infectieziekten Bulletin. 2008;19(9):272–4.
Statistics Netherlands. Available at: http://statline.cbs.nl/StatWeb/publication/?DM=SLNL&PA=71311ned&D1=0-1,4-5&D2=0&D3=194,219&HDR=G1,T&STB=G2&VW=T. Accessed June 2012.
Hakkaart-van Roijen L, Tan SS, Bouwmans C. Update on the Dutch manual for costing in economic evaluations. Updated version 2010 [Handleiding voor kostenonderzoek. Methoden en standaard kostprijzen voor economische evaluaties in de gezondheidszorg. Geactualiseerde versie 2010]. Diemen, The Netherlands: Institute for Medical Technology Assessment, The Dutch Healthcare Insurance Board; 2010.
Medicijnkosten [Medicine costs]. Available at: http://www.medicijnkosten.nl/. Accessed June 2012.
de Greeff SC, Lugnér AK, van den Heuvel DM, Mooi FR, de Melker HE. Economic analysis of pertussis illness in the Dutch population: Implications for current and future vaccination strategies. Vaccine. 2009;27:1932–7.
de Vries R, Kretzschmar M, Schellekens JF, Versteegh FG, Westra TA, Roord JJ, et al. Cost-effectiveness of adolescent pertussis vaccination for the Netherlands: using an individual based dynamic model. PLoS One. 2010;5(10):e13392.
European Medicines Agency. Available at: http://www.ema.europa.eu/ema/index.jsp?curl=pages/medicines/human/medicines/000296/human_med_000833.jsp&mid=WC0b01ac058001d124. Accessed September 2013.
Z-Index. Available at: http://www.z-index.nl/. Accessed September 2012.
Westra TA, de Vries R, Tamminga JJ, Sauboin CJ, Postma MJ. Cost-effectiveness analysis of various pertussis vaccination strategies primarily aimed at protecting infants in the Netherlands. Clin Ther. 2010;32(8):1479–95.
Fawke J, Whitehouse WP, Kudumala V. Monitoring of newborn weight, breast feeding and severe neurological sequelae secondary to dehydration. Arch Dis Child. 2008;93:264–5.
The Dutch Healthcare Authority. Available at: http://dbc-tarieven.nza.nl/Nzatarieven/top.do. DBC-code 140380. Accessed June 2012.
Conrad online shop. Available at: http://business.conrad.nl/ce/nl/product/831184/Mondkapje-FFP1-8710E-20st. Accessed June 2012.
CommonCrawl
arXiv:2211.12331 (hep-ph) [Submitted on 20 Nov 2022 (v1), last revised 22 Jan 2023 (this version, v2)] Title: Axion-Like Dark Matter Detection Using Stern-Gerlach Interferometer Authors: Milad Hajebrahimi, Hassan Manshouri, Mohammad Sharifian, Moslem Zarei Abstract: Quantum sensors based on the superposition of neutral atoms are promising for sensing the nature of dark matter (DM). In this study, we utilize the Stern-Gerlach (SG) interferometer configuration to seek a novel method for the detection of axion-like particles (ALPs). Using an SG interferometer, we create a spatial quantum superposition of neutral atoms such as $^{3}$He and $^{87}$Rb. It is shown that the interaction of ALPs with this superposition induces a relative phase between the superposed quantum components. We use the quantum Boltzmann equation (QBE) to introduce a first-principles analysis that describes the temporal evolution of the sensing system. The QBE approach employs quantum field theory (QFT) to highlight the role of the quantum nature of the interactions with the quantum systems. The resulting exclusion area demonstrates that our scheme allows for the exclusion of ALP masses in the range $10^{-10}\leq m_{a}\leq 10^{2}\,\mathrm{eV}$ and ALP-atom coupling constants in the range $10^{-13}\leq g_{ae}\leq 10^{0}$. Comments: 17 pages, 3 figures Subjects: High Energy Physics - Phenomenology (hep-ph); Cosmology and Nongalactic Astrophysics (astro-ph.CO); Quantum Physics (quant-ph) Journal reference: The European Physical Journal C, 83, 11, (2023) Related DOI: https://doi.org/10.1140/epjc/s10052-022-11152-9 From: Milad Hajebrahimi [v1] Sun, 20 Nov 2022 10:38:14 UTC (970 KB) [v2] Sun, 22 Jan 2023 09:18:42 UTC (973 KB)
CommonCrawl
Application and benchmark of SPH for modeling the impact in thermal spraying Stefan Rhys Jeske ORCID: orcid.org/0000-0003-3920-77651 na1, Jan Bender ORCID: orcid.org/0000-0002-1908-40271, Kirsten Bobzin2, Hendrik Heinemann ORCID: orcid.org/0000-0002-9315-62792, Kevin Jasutyn ORCID: orcid.org/0000-0001-5816-273X2 na1, Marek Simon ORCID: orcid.org/0000-0003-2426-67543, Oleg Mokrov ORCID: orcid.org/0000-0002-9380-69053, Rahul Sharma ORCID: orcid.org/0000-0002-6976-45303 & Uwe Reisgen ORCID: orcid.org/0000-0003-4920-23513 Computational Particle Mechanics volume 9, pages 1137–1152 (2022)Cite this article The properties of a thermally sprayed coating, such as its durability or thermal conductivity depend on its microstructure, which is in turn directly related to the particle impact process. To simulate this process, we present a 3D smoothed particle hydrodynamics (SPH) model, which represents the molten droplet as an incompressible fluid, while a semi-implicit Enthalpy-Porosity method is applied for modeling the phase change during solidification. In addition, we present an implicit correction for SPH simulations, based on well-known approaches, from which we can observe improved performance and simulation stability. We apply our SPH method to the impact and solidification of Al\(_2\)O\(_3\) droplets onto a substrate and perform a comprehensive quantitative comparison of our method with the commercial software Ansys Fluent using the volume of fluid (VOF) approach, while taking identical physical effects into consideration. The results are evaluated in depth, and we discuss the applicability of either method for the simulation of thermal spray deposition. We also evaluate the droplet spread factor given varying initial droplet diameters and compare these results with an analytic expression from the previous literature. We show that SPH is an excellent method for solving this free surface problem accurately and efficiently. Thermal spraying is a coating technology where particles of a feedstock material are heated, fully or partially melted and accelerated to high speeds onto a substrate. Through the impact of many particles a coating is built up. As a coating technology, thermal spraying is divided into three major process variants: flame spraying, electric arc spraying and plasma spraying [1]. In our work, we look more closely at plasma spraying, which is characterized by particle velocities of up to v = \(800\,\hbox {m s}^{-1}\) [2] and high plasma temperatures in the range of \(T = 6000\,^{\circ }\hbox {C}\) to \(T = 15{,}000\,^{\circ }\hbox {C}\), which are significantly above the melting temperature of any known material [1]. The injected particles are mostly in the size range of \(d =20\,\upmu \hbox {m}\) to \(90\,\upmu \hbox {m}\) [2]. As the particles spread upon impact, rapid solidification occurs with cooling rates in the range of \({\dot{q}}\) = \(10^7\,\hbox {K s}^{-1}\) to \(10^8\,\hbox {K s}^{-1}\) as a result of heat transfer from the liquid material to the underlying substrate and to the ambient atmosphere [3]. Thus, the particle deformation on the substrate, cooling, and solidification occur in rapid succession. The properties of the coating, such as its durability or thermal conductivity, are directly related to its microstructure, which is in turn directly related to the particle impact process. Therefore, a detailed understanding of the dynamics of particle impact on the substrate is essential for better control of the coating build-up. 
The deposition of particles during plasma spraying can only be poorly observed experimentally, due to the fact that splat formation and solidification occur within a few microseconds [4]. Consequently, many studies have been devoted to numerical and analytical investigation of particle impact, splat formation, and solidification [5,6,7,8,9]. Apart from the simulation of the plasma jet and the heat transfer from the plasma to the particles, the impact of the melted particle onto the substrate and its subsequent deformation are of great interest. Given this context, we present our contribution as the construction of a novel SPH model which heavily utilizes implicit solvers to great effect, and improves on both the simulation performance and stability. The novelty of our model comes from the unique combination of SPH models for the application to thermal spraying, as well as the implicit formulation of explicit models into a single unified implicit linear system. In this scope, we also propose implicit formulations for established SPH correction methods which aim to improve the physical accuracy. In order to better understand the capabilities of the SPH method for modeling droplet impacts, we quantitatively compare the performance of our own SPH implementation to the VOF method on Eulerian grids using Ansys Fluent for a highly simplified case. For this, we are able to show great agreement of the overall shape of the impacted droplet. Using our SPH method, we were also able to simulate the droplet in significantly higher resolution, while at the same time requiring a fraction of the computational cost. Furthermore, we present and discuss SPH simulation results in terms of droplet spread factor with varying initial droplet diameters. For these simulations, we explicitly discretize the substrate and take into account the transient heat transfer between droplet and substrate as well as temperature-dependent material properties. Overall, we are able to show that SPH is a very suitable method for the simulation of droplet impacts in thermal spraying. Finally, we plan to release the source code of this project as an addition to the already open-source SPlisHSPlasH [10] library. This may be very useful to engineers studying similar simulations, as the performance and stability improvements of using implicit solvers can result in faster iteration times. In the past, droplet impact simulations have often been performed with the Eulerian volume of fluid (VOF) method. The VOF method can be used for the simulation of the free surface interface between two or more immiscible fluids by tracking the volume fraction of each of the fluids in two- or three-dimensional meshes. However, in the process of particle impact a large deformation of the molten particle occurs, from spherical to a thin layer. Therefore, it requires a very fine, or spatially adaptive, mesh discretization over a large area of the simulation domain. While fine meshes require exponentially increasing computational resources, mesh adaptivity is very difficult to implement correctly and still incurs a noticeable performance penalty. Most of the computational cost of VOF algorithms is incurred by the cells that form the interfaces between different fluids [11]. Other than tracking the position of the free surface, the heat transfer and especially the effect of solidification strongly influences the dynamics of the particle impact process. A popular method to model the effect of solidification is the enthalpy-porosity method [12]. 
Here, the heat transfer is solved in the enthalpy formulation, with addition of a source term that is dependent on the solid fraction of the semi-solidified fluid in the so-called "mushy zone", to account for the latent heat of melting and solidification. Furthermore, another related source term is added to the momentum equation to account for increased flow resistance of the fluid in the semi-solidified region, due to growth of dendritic structures. This momentum sink, also called Darcy-term, is dependent on the permeability, which in turn is also dependent on the solid fraction in the "mushy zone". Several works have applied the VOF method for modeling particle impact in the thermal spray process. Pasandideh-Fard et al. [13] developed a 3D model to simulate the impact and solidification of a molten droplet on a flat substrate by applying the fixed velocity approach for solidification, where the solid is defined as liquid with infinite density and zero velocity. Another example is Zheng et al. [14], who developed a 3D particle impact model during plasma spraying utilizing the momentum source method of Ansys Fluent for modeling the solidification. While these studies helped to increase the understanding of the particle impact and solidification in thermal spraying, a lot of computational time was generally required. In previous works of the authors the particle impact and solidification was modeled using a modified momentum source approach [15] which was then applied to simulate multiple particle solidification [16]. The most recent work presents a calculation of the effective thermal conductivity using information of inter-splat gaps derived from a simulation of multiple splats of Al\(_{2}\)O\(_{3}\) droplets [17]. However, while the experimental results showed the characteristic length of the gaps to be \(<1\,\upmu \hbox {m}\), the simulation could only resolve gaps with a width of its cell size of \(2.25\,\upmu \hbox {m}\). This was resolved by numerically smearing the particle boundaries in this model and then enabling an approximately correct calculation of the thermal conductivity. However, this was only an interim solution, not a physically correct representation of the problem. Due to the high computational cost of VOF-based Eulerian approaches, there has been a range of works applying the smoothed particle hydrodynamics (SPH) method to the simulation of thermal spray deposition. Although the SPH method was originally introduced by Gingold and Monaghan [18] and Lucy [19] in the field of astrophysics, the method has already been used frequently for the simulation of the particle impact process. This is often due to the versatility regarding the simulation of changing domain topology, free surfaces as well as multiple phases. Since the feedstock material in the thermal spray process consists of particles and at the same time the SPH method is based on discretization particles, first and foremost, the terms should be properly distinguished. For this reason, the term particle is from now on used for the SPH discretization particle of the numerical method, while the feedstock particle that is molten in the thermal spray process and projected towards the surface will be termed droplet. Fang et al. [20] introduce many ideas for the SPH simulation of droplet spreading and solidification. They propose an improved pressure correction scheme and show simulations of droplets impacting on a solid substrate as well as similarities to some images obtained from experiments. 
In addition, for heat conduction, an artificial heat model based on internal energy is used. Particles are classified into liquid, melting and solid, and a source term in the momentum equation accounts for phase change. Results are then compared against an experiment. A similar approach is pursued by Zhang et al. [21] without the pressure correction, but also considering melting of the substrate for high thermal conductivities, low thermal capacities and high droplet temperatures. However, a validation of the method was not performed. Farrokhpanah et al. [22, 23] present a novel method for the simulation of latent heat in SPH with specific application to suspension plasma spraying, which is a variant of the thermal spray process. They use an explicit weakly compressible SPH approach with advected density, as well as an enthalpy-viscosity method to model the process of solidification. Abubakar and Arif [24] introduce a hybrid approach for the simulation of spray deposition. The SPH method is used to model the dynamics of splat formation during the spray process, while the finite element method (FEM) is used in order to model the solidification and compute residual stresses. The results are validated qualitatively by comparison with experimental results in literature. While this approach is quite novel, it relies on a very complex system in which numerical errors can occur at many different places (especially during transfers between discretizations). A coupled Eulerian and Lagrangian approach is also pursued by Zhu et al. [25] where it is used in order to simulate spray deposition of semi-molten ceramic droplets. Other relevant SPH works, especially regarding heat transfer, melting and solidification can be found in the simulation of arc welding processes [26,27,28]. Komen et al. [26] in particular also simulate the impact of molten droplets onto a substrate and investigate the effect on the weld pool, although the droplet impact velocities are significantly smaller and the droplet size is significantly larger than in thermal spraying. Computational method SPH discretization In the following, we briefly outline our SPH discretization method of the Navier–Stokes equations for incompressible fluid flow. In general, SPH is a total Lagrangian discretization method which implies that fluid quantities are observed at positions which move along with the fluid. These discrete positions, or particles, are advected and tracked through time and carry associated field quantities with them. The SPH method uses a weighted interpolation, derived from the convolutional identity with the \(\delta \)-distribution, in order to compute unknown quantities and derivatives needed to solve partial differential equations (PDEs). An arbitrary scalar quantity \(A_i = A(\varvec{x}_i)\) at particle position \(\varvec{x}_i\) can be computed by weighted summation using $$\begin{aligned} A_i = \sum _{j\in {\mathcal {N}}_i} V_j A_j W(\varvec{x}_i - \varvec{x}_j; h), \end{aligned}$$ where \(W(\varvec{x}_i - \varvec{x}_j; h)\) is a compactly supported weighting function, the commonly used cubic-spline kernel function in our case, around particle i with smoothing length h and \(\sum _{j\in {\mathcal {N}}_i}\) denotes a summation over the neighboring particles j of particle i, which lie within the compact support of W centered on particle i. Derivatives can easily be computed by differentiating Eq. (1) which shifts the derivative operator to the weighting function. 
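To make Eq. (1) and the kernel differentiation concrete, the following minimal sketch shows a cubic-spline kernel, its gradient and the weighted summation of Eq. (1). It is purely illustrative and not the SPlisHSPlasH implementation; the kernel is written in the common Monaghan form with compact support radius 2h, which may differ from the convention used in the actual code.

import numpy as np

def W_cubic(r, h):
    # Cubic-spline kernel (Monaghan form, 3D), compact support radius 2h.
    q = r / h
    sigma = 1.0 / (np.pi * h**3)
    if q < 1.0:
        return sigma * (1.0 - 1.5 * q**2 + 0.75 * q**3)
    if q < 2.0:
        return sigma * 0.25 * (2.0 - q)**3
    return 0.0

def grad_W_cubic(x_ij, h):
    # Gradient of W with respect to x_i, obtained by differentiating the kernel.
    r = np.linalg.norm(x_ij)
    if r < 1e-12 or r >= 2.0 * h:
        return np.zeros(3)
    q = r / h
    sigma = 1.0 / (np.pi * h**3)
    dW_dq = sigma * (-3.0 * q + 2.25 * q**2) if q < 1.0 else -sigma * 0.75 * (2.0 - q)**2
    return dW_dq / (h * r) * x_ij   # chain rule: dq/dx_i = x_ij / (h r)

def interpolate(A, V, x, i, neighbors, h):
    # Eq. (1): A_i = sum_j V_j A_j W(x_i - x_j; h) over the neighbors j of particle i.
    return sum(V[j] * A[j] * W_cubic(np.linalg.norm(x[i] - x[j]), h) for j in neighbors[i])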
When doing this, however, care has to be taken since the commonly used cubic-spline kernel does not have a smooth second derivative. For more information on derivatives, momentum conserving SPH sums, and SPH in general, the reader is referred to the works of Price [29] and Koschier et al. [30]. Incompressible fluid model The equations typically used for the simulation of incompressible fluids are the continuity equation, Eq. (2), and the Navier–Stokes equation, Eq. (3): $$\begin{aligned} \frac{D\rho }{Dt}&= 0 \quad \leftrightarrow \quad \frac{\partial \rho }{\partial t} = -\rho \nabla \cdot \varvec{v} \end{aligned}$$ $$\begin{aligned} \rho \frac{D\varvec{v}}{D t}&= -\nabla p + \mu \nabla ^2 \varvec{v} + \varvec{f}_{\hbox {ext}} + \varvec{f}_{\hbox {st}}. \end{aligned}$$ Here, \(\rho \) denotes the fluid density (kg m\(^{-3}\)), \(\varvec{v}\) the velocity (m s\(^{-1}\)), p the pressure (N m\(^{-2}\)), \(\mu \) the dynamic viscosity (Pa s), \(\varvec{f}_{\hbox {ext}}\) the external volumetric forces (N m\(^{-3}\)), e.g., gravity, and \(\varvec{f}_{\hbox {st}}\) the force due to surface tension (N m\(^{-3}\)). The pressure force is computed using the divergence-free SPH (DFSPH) method as presented by Bender and Koschier [31], which is an implicit solver ensuring both constant density and a divergence-free velocity field, see Eq. (2). We have found that this method allows us to use larger time steps during simulation, in contrast to explicit pressure solvers which compute the pressure force using an equation of state (EOS), e.g., used by Farrokhpanah et al. [23] and described by Monaghan [32]. Also, recomputing the density (as is performed in DFSPH) in each time step avoids the possible loss of volume when advecting the local density using the continuity equation, Eq. (2). While implicit pressure solvers have been explored in related works, e.g., see Fang et al. [20], DFSPH also enforces a divergence-free velocity field which has been shown by Bender and Koschier [31] to improve the stability of the simulation. The viscosity force is also computed implicitly using the model by Weiler et al. [33]. It is obtained by solving for accelerations \(\varvec{a}_{\hbox {visc}}\) such that $$\begin{aligned} \varvec{a}_{\hbox {visc}} = \frac{\varvec{v}_{\hbox {visc}}^{t+1} - \varvec{v}^t}{\varDelta t} = \nu \nabla ^2 \varvec{v}_{\hbox {visc}}^{t+1}. \end{aligned}$$ Discretizing this equation yields a system of linear equations for the velocity \(\varvec{v}_{\hbox {visc}}^{t+1}\) which is then used to compute the resulting acceleration due to viscous forces \(\varvec{a}_{\hbox {visc}}\) using the finite difference formula in Eq. (4). In general, the extensive use of implicit solvers enables the usage of larger simulation time steps without causing instabilities. Surface tension computation in SPH is known to be a challenging problem, since it is very difficult to obtain a clear definition of the fluid surface. There exist formulations based on fluid surface curvature, often derived from the continuum surface force (CSF) model by Brackbill et al. [34], as well as formulations based on intermolecular forces. For our purposes, we have implemented the CSF model of Müller et al. 
[35] based on the model of Morris [36], where force \(\varvec{f}_{i, \text {st}}\), curvature \(\nabla ^2 c_i\) and surface normal \(\varvec{n}_i\) are computed from a smoothed color field \(c_i\): $$\begin{aligned} c_i&= \sum _j \frac{m_j}{\rho _j} W_{ij}, \end{aligned}$$ $$\begin{aligned} \varvec{n}_i&= \sum _j \frac{m_j}{\rho _j} (c_j - c_i) \nabla W_{ij}, \end{aligned}$$ $$\begin{aligned} \nabla ^2 c_i&= -\sum _j \frac{m_j}{\rho _j} (c_i - c_j) \frac{2 ||\nabla W_{ij}||}{||\varvec{x}_i - \varvec{x}_j|| + \varepsilon }, \end{aligned}$$ $$\begin{aligned} \varvec{f}_{i, \text {st}}&= -\sigma \nabla ^2 c_i \frac{\varvec{n}_i}{||\varvec{n}_i||}. \end{aligned}$$ Here, \(\sigma \) denotes the surface tension coefficient (\(\hbox {N m}^{-1}\)). The computation of the normals and especially the curvature is documented to be prone to errors due to particle disorder; however, we have not observed any significant instabilities in our simulations. This could be due to the extremely small time scale of our simulations as well as due to other forces being more dominant. Solidification Solidification is often considered to be one of the main determining factors of the dynamics of the thermal spray process. It depends on the splat thickness, the thermal conductivities of both the sprayed feedstock material as well as the underlying solid material, and the thermal contact resistance between the flattening droplet and the substrate. It directly affects the deformation behavior, the splat shape and the coating microstructure [2]. The fluid of the droplet is cooled upon contact with the wall and the subsequent solidification process is modeled by taking into account a Darcy term (momentum sink), following the well-known enthalpy-porosity method, for modeling the solidification of pure metals [37] and of binary alloys [38]. The latent heat of melting and solidification is neglected for the initial comparisons between our SPH model and Ansys to keep the comparison as simple as possible. However, as the heat transfer solver is formulated in terms of the enthalpy, it can be easily extended to include the latent heat, which is shown in the simulations presented in Sect. 4.2. The considered Darcy term adds a deceleration to the Navier–Stokes equation which has a strong movement-inhibiting effect on the fluid, once the temperature of the fluid becomes low enough. This so-called momentum sink accounts for the semi-liquid state in the mushy zone, where already some nucleation and dendrite growth has occurred, thereby affecting the properties of the fluid. The effect is controlled by the liquid fraction \(f_{\hbox {l}}(T)\), here modeled as a simple Heaviside function, Eq. (10), and by a morphological constant C: $$\begin{aligned} \varvec{a}_{\hbox {porosity}}&= -\varvec{v} C f_{\hbox {l}}(T) \end{aligned}$$ $$\begin{aligned} f_{\hbox {l}}(T)&= {\left\{ \begin{array}{ll} 0 &{} T > T_{\hbox {l}}\\ 1 &{} T_{\hbox {l}}-\varDelta T_{\hbox {l}} \le T \le T_{\hbox {l}}\\ 1 &{} T \le T_{\hbox {l}}-\varDelta T_{\hbox {l}}. \end{array}\right. } \end{aligned}$$ In this equation, C has the unit (s\(^{-1}\)) and is, intuitively, related to the time span within which the fluid will solidify completely, given that there are no other influences. In order to be able to capture the solidification process, regardless of simulation method, the maximum time step of the simulation should be selected to be smaller than \(C^{-1}\).
The liquidus temperature is denoted by \(T_{\hbox {l}}\) and the temperature range of the mushy region by \(\varDelta T_{\hbox {l}}\), such that the fluid is assumed to be completely solid as soon as \(T \le T_{\hbox {l}}-\varDelta T_{\hbox {l}}\), see the last case of Eq. (10). The values for C are often very large, resulting in very large deceleration as soon as \(T\le T_{\hbox {l}}\), so large in fact that simulations using explicit time stepping can become unstable. These instabilities are a result of the material solidifying in less than a single simulation step. This is remedied by constructing an algebraic equation which computes the acceleration using the projected velocity of the next time step as $$\begin{aligned} \varvec{a}_{\hbox {porosity}}&= \frac{\varvec{v}^{t+1} - \varvec{v}^{t}}{\varDelta t} = -\varvec{v}^{t+1} C f_{\hbox {l}}(T)\end{aligned}$$ $$\begin{aligned} \varvec{v}^{t+1}&= \varvec{v}^t \frac{1}{1 + \varDelta t C f_{\hbox {l}}(T)}. \end{aligned}$$ The acceleration is then simply computed by inserting the expression for \(\varvec{v}^{t+1}\) $$\begin{aligned} \varvec{a}_{\hbox {porosity}} = \frac{\varvec{v}^t}{\varDelta t}\left( \frac{1}{1 + \varDelta t C f_{\hbox {l}}(T)} - 1\right) . \end{aligned}$$ This semi-implicit formulation allows the usage of larger time steps in the simulation without causing instabilities, but comes at the cost of slightly dampening the effect. Additionally, if the time step is very large, the solidification may occur very quickly. Nevertheless, since the observed time intervals are often too large to observe the solidification process of single particles anyway, this is deemed to be an acceptable trade-off. The parameters used for the momentum sink are shown in Sect. 3.4. Correction terms In our implementation, we have found that it is also necessary to add correction terms, which improve the quality of SPH simulations. The first such term was documented by Monaghan [39] and reduces the interpenetration of particles by smoothing the velocity field while conserving linear and angular momentum, without adding dissipation. The acceleration of this correction, also sometimes called XSPH, is given by $$\begin{aligned} \varvec{a}_{i,{\hbox {xsph}}} = - \frac{\alpha }{\varDelta t} \sum _{j\in {\mathcal {N}}_i} \frac{{\overline{m}}_{ij}}{{\overline{\rho }}_{ij}} (\varvec{v}_i - \varvec{v}_j) W_{ij}, \end{aligned}$$ where \(\alpha \) denotes the (dimensionless) strength of this smoothing, \(\overline{m_{ij}}\) the average mass between particle i and j and \(\overline{\rho _{ij}}\) the averaged density. In the original work \(\alpha = 1\) was proposed, yet we have found smaller values, in the range of 0.1 to 0.3, to also work very well. The second correction addresses the issue of tensile instability at the surface of SPH fluids. When using a method which recomputes the density instead of advecting it, it occurs that the density estimate at free surfaces is erroneous due to missing particles and causes an uncontrollable artificial surface tension effect. This is solved in the DFSPH method by only considering "over"-pressures due to larger density values and clamping smaller densities to the rest density. This entirely removes the instability at the surface, but comes at the cost of reduced, non-surface tension, fluid cohesion. 
In order to restore fluid cohesion, a corrective force of the form $$\begin{aligned} \varvec{a}_{i,{\hbox {cohesion}}} = -\gamma ^{\hbox {f}} \sum _{j\in {\mathcal {N}}_i^{\hbox {f}}} \frac{{\overline{m}}_{ij}}{{\overline{\rho }}_{ij}} (x_i - x_j) W_{ij}, \end{aligned}$$ is employed, where \(\gamma ^{\hbox {f}}\) is a parameter controlling the strength of cohesion (\(\hbox {N m}^{-1}\)) and \({\mathcal {N}}_i^{\hbox {f}}\) denotes the neighborhood of fluid particle i within the same fluid phase. Adhesion to other phases and boundaries is formulated analogously $$\begin{aligned} \varvec{a}_{i,{\hbox {adhesion}}} = -\gamma ^{\hbox {b}} \sum _{j\in {\mathcal {N}}_i^{\hbox {b}}} \frac{{\overline{m}}_{ij}}{{\overline{\rho }}_{ij}} (x_i - x_j) W_{ij}, \end{aligned}$$ where \(\gamma ^{\hbox {b}}\) is a parameter controlling the strength of adhesion (\(\hbox {N\,m}^{-1}\)) and \({\mathcal {N}}_i^{\hbox {b}}\) denotes the neighborhood of fluid particle i within the boundary phase. It should be noted that for the case that \(\gamma ^{\hbox {b}} = \gamma ^{\hbox {f}}\) the net cohesive–adhesive force at the boundary is zero. Adhesion is therefore modeled by \(\gamma ^{\hbox {b}} > \gamma ^{\hbox {f}}\) and repulsion by \(\gamma ^{\hbox {b}} < \gamma ^{\hbox {f}}\). By design, this correction term only adds forces in regions with particle deficiency and is inspired by the work of Monaghan [40], yet instead of adding an additional repulsion term the attraction is first clamped by the implicit pressure solver and then reintroduced as cohesion and adhesion. A very similar cohesive force was also used by Becker and Teschner [41]. We note that due to the anti-symmetric nature of these formulations, they conserve angular and linear momentum. In order to further improve the stability and in order to be able to use larger time steps, we formulate these corrections implicitly in terms of velocity and incorporate them, together with the viscosity force, into a single linear system $$\begin{aligned} \begin{aligned} \frac{\varvec{v}_i^{t+1} - \varvec{v}_i^t}{\varDelta t}&= \frac{\mu }{\rho _i} 2(d+2) \sum _{j\in {\mathcal {N}}_i} \frac{{\overline{m}}_{ij}}{\rho _j} \frac{\varvec{v}_{ij}^{t+1}\cdot \varvec{x}_{ij}}{||\varvec{x}_{ij}||^2 + 0.01 h^2} \nabla W_{ij} \\&\quad - \frac{\alpha }{\varDelta t} \sum _{j\in {\mathcal {N}}_i} \frac{{\overline{m}}_{ij}}{{\overline{\rho }}_{ij}} \varvec{v}_{ij}^{t+1} W_{ij} \\&\quad -\gamma ^{\hbox {f}} \sum _{j\in {\mathcal {N}}_i^{\hbox {f}}} \frac{{\overline{m}}_{ij}}{{\overline{\rho }}_{ij}} (x_i - x_j) W_{ij} \\&\quad -\gamma ^{\hbox {b}} \sum _{j\in {\mathcal {N}}_i^{\hbox {b}}} \frac{{\overline{m}}_{ij}}{{\overline{\rho }}_{ij}} (x_i - x_j) W_{ij}, \end{aligned} \end{aligned}$$ where \(\varvec{v}_{ij} = \varvec{v}_i - \varvec{v}_j\), \(\varvec{x}_{ij} = \varvec{x}_i - \varvec{x}_j\) and d is the number of spatial dimensions, i.e., \(d=3\). This linear system is solved using the matrix-free conjugate gradient method. We are not aware of any other works utilizing implicit solvers in SPH simulations to this degree. Doing this, we are able to observe a very significant performance improvement due to the ability of simulating with large time steps without causing instabilities. Pseudocode of our full fluid solver is shown in Algorithm 1. It can be seen that the only explicitly computed component is the surface tension, while the pressure and remaining forces are independently implicitly integrated. 
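As a concrete illustration of this implicit-by-construction approach, the solidification sink of Eqs. (12) and (13) reduces to a per-particle velocity damping that remains bounded for arbitrarily large \(\varDelta t\, C\). The following minimal sketch is illustrative only; the names and structure are assumptions and are not taken from the SPlisHSPlasH code.

def liquid_fraction(T, T_liquidus):
    # Step-like indicator of Eq. (10): the sink is inactive above the liquidus
    # temperature and fully active below it.
    return 0.0 if T > T_liquidus else 1.0

def porosity_acceleration(v, dt, C, T, T_liquidus):
    # Semi-implicit momentum sink, Eqs. (12) and (13).
    # v is the particle velocity at time t; C is the morphological constant (1/s),
    # which may be very large without destabilizing the update.
    f_l = liquid_fraction(T, T_liquidus)
    v_next = v / (1.0 + dt * C * f_l)   # Eq. (12)
    return (v_next - v) / dt            # Eq. (13)

In the full solver, this damping acts alongside the implicit viscosity and correction terms of Eq. (17), which are gathered into the single matrix-free linear system described above.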
Heat conduction is governed by the Fourier equation $$\begin{aligned} \rho c_p\frac{D T}{D t} = \nabla \cdot \left( \lambda \nabla T \right) + {\dot{q}}''', \end{aligned}$$ where \(\rho \) denotes the material density (kg m\(^{-3}\)), \(c_p\) is the specific heat capacity (J kg\(^{-1}\) K\(^{-1}\)), T is the temperature (K), \(\lambda \) is the thermal conductivity (W K\(^{-1}\) m\(^{-1}\)) and \({\dot{q}}'''\) is the contribution from volumetric heat sources (W m\(^{-3}\)). In all of our simulations, we set \({\dot{q}}''' = 0\), since we do not need any external heat sources. Instead of using the temperature as the main variable for heat transfer, we transform Eq. (18) using the relationship between specific enthalpy h (J kg\(^{-1}\)) and the temperature T $$\begin{aligned} h(T) = \int _0^T c_p(T) dT, \end{aligned}$$ where \(c_p\) may also be a function of temperature, taking into account, e.g., the latent heat of melting. Assuming that \(c_p(T)\) is continuous results in a bijective function, such that h(T) as well as T(h) are well-defined. This results in the following equation which uses both the specific enthalpy h as well as the temperature: $$\begin{aligned} \rho \frac{Dh}{D t} = \nabla \cdot \left( \lambda \nabla T \right) . \end{aligned}$$ The equation above is discretized using SPH and explicit Euler time integration, resulting in the following discrete equation for the fluid particle with index i $$\begin{aligned} \rho _i \frac{h_i^{t+1} - h_i^t}{\varDelta t} = \nabla \cdot (\lambda \nabla T)_i^t. \end{aligned}$$ The discretization of the heat conduction term is given in the following equation: $$\begin{aligned} \nabla \cdot (\lambda \nabla T)_i = \sum _{j\in {\mathcal {N}}_i}\frac{m_j}{ \rho _j}\frac{4\lambda _i\lambda _j}{\lambda _i + \lambda _j}(T_i - T_j)\frac{ \nabla _i W_{ij}\cdot \varvec{r}_{ij}}{||\varvec{r}_{ij}||^2}, \end{aligned}$$ as is also the case for other related work, e.g., Zhang et al. [21], and was initially proposed by Brookshaw [42]. It should be noted that \(\lambda _i = \lambda (T_i)\) is generally a function of temperature and that \(\varvec{r}_{ij} = \varvec{x}_i - \varvec{x}_j\) is the vector between the positions of particle i and particle j. Additionally, \(\nabla _i W_{ij}\) denotes the gradient of \(W_{ij} = W(\varvec{x}_i - \varvec{x}_j; h)\) with respect to the position of particle \(\varvec{x}_i\). Since heat can only be conducted within the material itself, the SPH formulation is adiabatic by construction. Finally, in order to reduce the computational requirements during simulation, the enthalpy is precomputed in terms of the temperature by integration of Eq. (19). Our heat solver algorithm is outlined in Algorithm 2, while the overall simulation pipeline is summarized in Algorithm 3. Boundary contributions for all SPH terms are computed using the approach of Akinci et al. [43]. The only boundary present in our simulation is the substrate, which will be described further in Sect. 3.4. Using the approach of Akinci et al., the boundary is sampled using a single layer of particles on the surface of the boundary. The contribution of the boundary to the SPH summation of fluid particles can be generalized as $$\begin{aligned} A_i = A_i^{\hbox {f}} + A_i^{\hbox {b}} = \sum _{j \in {\mathcal {N}}_i^{\hbox {f}}} V_j A_j W_{ij} + \sum _{j \in {\mathcal {N}}_i^{\hbox {b}}} V_j^{\hbox {b}} A_j^{\hbox {b}} W_{ij}. 
\end{aligned}$$ The superscript f indicates contribution from fluid particles, while b indicates contribution from boundary particles within the compact support of particle i. The volume of boundary particles is computed using $$\begin{aligned} V_i^{\hbox {b}} = \frac{1}{\sum _{j\in {\mathcal {N}}_i^{\hbox {b}}} W_{ij}}, \end{aligned}$$ which is an SPH summation over the other boundary particles in the compact support of boundary particle i. The boundary volumes are incorporated into the summations of fluid particles by extending fluid quantities into the boundary region. This means for example using the rest density of the fluid to compute a mass from the boundary volume and utilizing this contribution for fluid density computations. Similar considerations can be made for all other cases, such as for the Fourier equation. The heat conduction term in Eq. (22) is extended by the following term for the boundary contribution $$\begin{aligned} \nabla \cdot (\lambda \nabla T)_i^{\hbox {b}} = \sum _{j\in {\mathcal {N}}_i^{\hbox {b}}} V_j^{\hbox {b}} 2\lambda _i (T_i - T_j^{\hbox {b}})\frac{ \nabla _i W_{ij}\cdot \varvec{r}_{ij}}{\varvec{r}_{ij}^2}. \end{aligned}$$ This is equivalent to Eq. (22), when also using \(\lambda _j^{\hbox {b}} = \lambda _i\) and prescribing the wall temperature \(T_j^{\hbox {b}}\). This allows specifying a Dirichlet boundary condition on the substrate. According to Mostaghimi et al. [44] the estimated heat loss of the droplet to the surrounding gas is roughly three orders of magnitude lower than that of heat conduction into the substrate. Therefore, we assume the free surface of the droplet to be adiabatic, i.e., we neglect heat losses of the droplet into the surrounding gas for our current investigations. Simulation domain In the previous sections, we introduced our simulation model for droplet impact onto a substrate with solidification. Subsequently, we describe the simulation setup used to compare our SPH model with Ansys Fluent. Figure 1 shows the simulation domain for droplet impact. This includes the material properties, the initial droplet in-flight properties, the substrate wall properties and boundary conditions. The material properties of the ceramic droplet are listed in Table 1. Further, the simulation parameters in Ansys Fluent and for SPH are listed in Table 2. Schematic diagram of the simulation domain for the droplet impact Table 1 Material properties of ceramic droplet and boundary conditions Table 2 Simulation parameters A 3D thermal spray coating build-up model based on a previous publication of the authors [17] was created and implemented in Ansys Fluent. In this model, the impact of thermally sprayed ceramic droplets onto a flat substrate was simulated. A momentum source function was used to simplify the calculation of the solidification process, with parameters as shown in Table 2c. A laminar viscous model was used, and the energy solver was enabled. The dimensions of the spatial domain are \(225\,\upmu \hbox {m} \times 225\,\upmu \hbox {m} \times 75\,\upmu \hbox {m}\), incorporating the droplet as well as a surrounding gaseous atmosphere which was assumed to be air. To shorten the computation time for the simulation of the impact of multiple droplets, a mesh edge length of \(2.25\,\upmu \hbox {m}\) was chosen. The calculation mesh consists of 330,000 cells. 
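As a quick consistency check, and assuming a uniform hexahedral mesh with the stated edge length, the cell count follows directly from the domain dimensions: $$\begin{aligned} \frac{225}{2.25} \times \frac{225}{2.25} \times \frac{75}{2.25} \approx 100 \times 100 \times 33 \approx 3.3 \times 10^{5}, \end{aligned}$$ which is consistent with the stated 330,000 cells.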
The boundaries of the domain consist of an inlet for the droplet on top, with velocity \(\varvec{v}_{p}= 200\,\hbox {m s}^{-1}\), the substrate as a free-slip wall at the bottom and outlets with pressure \(p_{\hbox {amb}}=101{,}325\) Pa and backflow total temperature \(T_{\hbox {out}}=3000\) K. The interior domain is filled with air at \(T_{\hbox {gas}}=2400\) K at the start of the simulation. The temperature of the substrate is set to \(T_{\hbox {substrate}} = 300\) K. The free-slip boundary condition was applied for the contact between the droplet and the substrate, and a VOF approach was assumed for the calculation of the free surface of the ceramic droplet and the surrounding gas phase. Due to the rapid solidification resulting from the Dirichlet thermal boundary condition, the free-slip boundary condition appeared to be a reasonable assumption in our experiments. The numerical parameters of the simulation in Ansys Fluent are given in Table 2b. The simulations were performed using Ansys Fluent 2020 R2. The same 3D thermal spray coating build-up model was created using our SPH method. The domain of the simulation model is shown in Fig. 2a. The numerical parameters for the simulation and for the momentum sink are each listed in Table 2a and c, respectively. The droplet has a diameter of \(d = 62\,\upmu \hbox {m}\) and initial in-flight properties such as temperature of \(T = 2500\) K and velocity of \(v = 200\,\hbox {m s}^{-1}\). The droplet was discretized with particles of an individual radius of \(r = 0.4\,\upmu \hbox {m}\) and consists of 238,310 particles. The substrate was modeled as a rigid body with a Dirichlet boundary condition for the temperature of \(T_{\hbox {wall}} = 300\) K and a free-slip boundary condition for the momentum equation. The model was implemented in a custom branch of SPlisHSPlasH [10]. Computational domain of droplet impact simulation In the following, we evaluate our SPH model in two different settings. First, we compare the results of our SPH model to the results obtained by a simulation using Ansys Fluent with the VOF method. For this, we consider simplified material parameters and boundary conditions in order to eliminate possible modes of deviation and inaccuracy in both methods. We first attempt to find sufficient discretization densities by conducting a mesh convergence analysis using the diameter of the splat as the convergence indicator. Additionally, we are able to show excellent agreement between the two methods, while our SPH method requires only a fraction of the computational cost for a higher-density discretization. Afterwards, we take into account temperature-dependent material properties, the latent heat of melting and the heat transfer into the substrate. With this, we evaluate the obtained splat shapes and spread factors of different diameter droplets and validate these results by comparison with an analytical expression. Convergence analysis of the simulation models in SPH and Ansys Fluent Comparison to Ansys Fluent Mesh convergence A sensitivity analysis of the mesh and particle resolution was conducted. The droplet impact simulation was run with different mesh sizes and particle resolutions. Both simulations were computed on 32 cores of a high-performance compute cluster. For the simulation in Ansys Fluent, the coarser, coarse, reference, and fine computational meshes consist of 83,349, 165,099, 330,000 and 540,000 cells, respectively.
For the simulation using our SPH method, the coarser, coarse, reference, and fine particle resolutions consist of 60,112, 119,129, 238,310 and 476,486 particles, respectively. The results in Fig. 3 show that as the resolution increases in SPH and Ansys Fluent, the diameters of the splats converge and remain nearly identical at resolutions above these values. In addition, the increasing resolution leads to a longer computation time. The computation time required to solve the simulation in Ansys Fluent from coarser to finer mesh resolution is about 5, 8, 20 and 30 min. On the other hand, SPH generally requires less computation time, which is about 1, 4, 5 and 20 min from coarser to finer particle resolution. Therefore, the reference mesh size of \(2.25\,\upmu \hbox {m}\) and the reference particle radius of \(0.4\,\upmu \hbox {m}\) are used in the further simulations for each method, respectively. Comparison of cross-sections of the droplet impact in SPH (left) and Ansys Fluent/VOF=0.5 (right), for several points in time Results Figure 4 shows the droplet impact and the subsequent spreading of the droplet and solidification process modeled with SPH (left) and Ansys Fluent (right). The main dynamics of the process occur at the three points in time shown. It can be seen that there is a relatively high agreement between both methods. However, it should be noted that the Ansys Fluent approach is performed with the VOF method for the modeling of the free surface of the liquid and therefore the presented shape represents the iso-surface of volume fraction 0.5 of the ceramic phase. While the overall resolution of the mesh is quite high, the mesh is relatively coarse in the region of interest, as shown in Fig. 5. As such, the dispersion of the fluid boundary surface can be considerable in the VOF method and the apparent area of the cross section appears somewhat smaller than the area of the cross section in the SPH method, although in both cases the total mass is conserved. Nevertheless, Fig. 6 shows the decrease in the ratio of mass enclosed by the VOF-0.5 contour with respect to the total fluid mass, as the mass disperses across a larger region. After impact at \(t = 0.5\,\upmu \hbox {s}\), the mass share of the VOF-0.5 contour of the droplet decreases steadily and then levels out at \(t = 1.4\,\upmu \hbox {s}\). Cross section of volume fraction of the ceramic phase in Ansys Fluent at \(t = 1.0\,\upmu \hbox {s}\) Mass share of the ceramic phase enclosed within the volume VOF\(\ge 0.5\) of the total mass of the phase Another peculiarity is the shape of the droplet in free flight. While the shape is highly spherical in the SPH method, the droplet appears to be elliptically compressed in the direction of flight in the VOF method. While the surrounding region of the droplet is filled with stagnant air, the droplet itself is immersed in an airstream of the impact velocity in order to avoid compression due to drag. As such, the slightly compressed shape can be explained by difficulties of achieving a perfect droplet shape using a transient inlet function. Top view of the simulated splats in SPH (bottom) and Ansys Fluent/VOF=0.5 (top) for several times. Times t = \(1.5\,\upmu \hbox {s}\) and \(t = 2.0\,\upmu \hbox {s}\) for Ansys Fluent are included for reference, although the resolution of the mesh is too low for the splat thickness to derive meaningful results Figure 7 presents the top view for the droplet impact for several points in time.
It can be seen that the splats in Fluent and SPH show an almost perfectly symmetric shape. Furthermore, it was observed that the splat seems to cool off faster in Fluent than in SPH. However, it should be noted here again that the visible surface in Ansys Fluent corresponds to the volume fraction 0.5 of the ceramic phase. Additionally, for times \(t \ge 1.0\,\upmu \hbox {s}\) the thickness of the splat became smaller than 5 to 6 mesh cells (see also Fig. 5), which is problematic in the VOF approach, as it disperses the boundary of the free surface and therefore requires several mesh cells for the transition from one fluid to the other. When the ratio of the number of cells in the transition region to the number of cells with volume fraction 1.0 becomes very large, accurate observations about the enclosed volume become difficult, because they are highly sensitive to the selection of the contoured volume fraction. It is therefore concluded that the results for \(t \ge 1.0\,\upmu \hbox {s}\) should be considered with care, but they are included in this figure for reference. The diameter and the height of the formed splat were compared for SPH and Ansys Fluent for several volume fractions of the ceramic phase (0.1, 0.5, 0.9) over time in Fig. 8. The droplet impacted the substrate at \(0.5\,\upmu \hbox {s}\) and subsequently spread out, gaining in diameter and losing in height until it reached a steady state. When examining the diameter and height of the splat calculated with the VOF method in Ansys Fluent, it was found that the dimensions vary significantly depending on the volume fraction, which is in accordance with the observation shown in Fig. 5. Figure 8a shows that there is a good agreement in diameter for SPH compared with the VOF method. The diameter calculated with SPH lies between those of volume fractions 0.5 and 0.9. It should be noted that the parameter of the cohesion correction, see Sect. 3.1.6, was adjusted manually to reach this agreement. As discussed in the analysis of Fig. 7, the presented results of Ansys Fluent for \(t \ge 1.0\,\upmu \hbox {s}\) do not have a sufficient mesh resolution, despite having a total of 330k cells, and are therefore not discussed further. Comparison of the diameter and height of the simulated splats over time for SPH and Ansys Fluent at several volume fractions (0.1, 0.5, 0.9) of the ceramic phase The height, as shown in Fig. 8b, shows the same tendency for both approaches, but a consistently smaller height was observed for all volume fractions of the ceramic phase in Ansys Fluent when compared to SPH. This is again consistent with the observed decrease in cross-section area in Fig. 5 and in mass share of the droplet in Fig. 6. However, since both the height as well as the diameter have a strong influence on the heat transfer from the splat to the substrate, this difference is of high significance. We attribute higher confidence to the SPH result regarding the height, because it does not suffer from the aforementioned volume dispersion. Comparison of the maximum, minimum and average radial and axial velocity of the droplet Figure 9 shows a comparison of the simulated maximum, minimum and average velocity over time taken over a half space in radial (x) and height (y) direction of Ansys Fluent (at VOF\(=0.5\)) and SPH. As the droplet impacts the substrate at \(t= 0.5\,\upmu \hbox {s}\), it can be seen in Fig.
9a that shortly after impact, a very strong increase in the maximum radial velocity from the initial maximum radial velocity of \(0\,\hbox {m s}^{-1}\) occurs for both methods. This increase reaches roughly \(350\,\hbox {m s}^{-1}\) for SPH, while the maximum radial velocity reaches nearly \(600\,\hbox {m s}^{-1}\) in the case of Ansys Fluent. After this initial increase, the maximum radial velocity decreases towards zero for both cases for the time frame considered. Compared to this, the average velocity taken over a half space of both cases shows good agreement, with SPH exhibiting a consistently lower average radial velocity over the whole time frame considered. Furthermore, it can be observed that the minimum velocity in the Ansys Fluent case remains zero for the entire duration, while the minimum velocity in SPH becomes negative, with a small but distinct minimum of approximately \(-40\,\hbox {m s}^{-1}\) at the moment of impact. During the time shortly after impact, the negative velocities in radial direction are somewhat contrary to the expected dynamic of the process, in which the fluid of the droplet would spread outward (positive velocity in radial direction) to form the splat. Upon further investigation, these particles are generally located near or on the cut plane, where particle disorder can cause individual particles to accelerate in the negative x-direction. In Fig. 9b, the maximum, minimum and average velocity in vertical direction are shown. Please note that the droplet moves towards the substrate, i.e., in negative y-direction. After impact at \(t = 0.5\,\upmu \hbox {s}\), the maximum vertical velocity decays smoothly to zero in the case of Ansys Fluent. In contrast to this, the observed maximum vertical velocity shows a slightly different behavior in SPH. It has a peak of almost \(-400\,\hbox {m s}^{-1}\) at the time of impact t = \(0.5\,\upmu \hbox {s}\) before decreasing smoothly towards zero. The maximum vertical velocities of both cases for \(t \ge 0.6\,\upmu \hbox {s}\) show an otherwise excellent agreement. Similarly, the minimum velocity drops rapidly after impact and remains at zero or close to zero in the case of Ansys Fluent, while the minimum velocity simulated in SPH shows a different post-impact behavior. At the time of the impact, the minimum velocity reaches an absolute minimum of roughly \(50\,\hbox {m s}^{-1}\) before approaching zero. It is noticeable that the peak of the maximum velocity precedes the negative peak of the minimum velocity. While this can be understood in terms of a rebound effect, the reason for the large spread between minimum and maximum velocity, as well as the deviation of these observables from Ansys Fluent, will be discussed later in detail in the analysis of Fig. 10. Finally, the average velocities of both simulation methods have an excellent agreement over time, even better than was the case for the radial velocity in Fig. 9a. The droplet starts with a velocity of \(-200\,\hbox {m s}^{-1}\) in both methods, then the average velocity in vertical direction decreases gradually after impact and reaches zero at \(t = 1.0\,\upmu \hbox {s}\). A more detailed analysis of the apparent disagreement noted in Fig. 9b can be seen in Fig. 10 for the moment of impact at t = \(0.5\,\upmu \hbox {s}\). Figure 10 shows that a small fraction of particles at the side of the droplet have a very high vertical velocity of \(-350\,\hbox {m s}^{-1}\) (color-coded in blue).
Next to particles that are in contact with the wall, a small fraction of particles experience the rebound effect and their velocities reach nearly \(50\,\hbox {m s}^{-1}\) (color-coded in red), before being counteracted by the bulk movement of the droplet. The main bulk of the particles has a velocity range from 0 to \(-200\,\hbox {m s}^{-1}\), which corresponds to the in-flight velocity of the droplet and solidified particles. This observed peak of the velocity at the moment of impact is assumed to be the result of the sudden difference in velocity due to solidification of the fluid and the subsequent increase of local density and jump in pressure. However, this does not necessarily imply an unphysical result; on the contrary, it might actually capture the real conditions even more accurately than the Eulerian method in Ansys Fluent. Note that the minimum and maximum velocities are difficult to compare quantitatively and are only discussed in order to give better insight into the dynamics of the process for each simulation method. While the actual values differ slightly, the overall trends visible in the minimum and maximum velocities are very similar for both methods. Axial velocity at the moment of impact; the in-flight velocity of the droplet is directed in negative y-direction. (Color figure online) Table 3 Material properties of ceramic droplet and substrate. Note that the latent heat of melting and the specific heat capacity are included in the specific enthalpy Simulation with substrate We now investigate the spread factor of droplet impacts given different initial diameters. For this, we modify the previous material properties to be temperature-dependent and to also take into account the latent heat of melting. In addition, we explicitly discretize the substrate, whose material parameters are likewise temperature-dependent. The modified material properties are summarized in Table 3, with the temperature-dependent material properties being shown in Fig. 13. Note that only modified material properties are listed and that the cohesion \(\gamma ^{\hbox {f}}\) and adhesion factors \(\gamma ^{\hbox {b}}\) were adjusted to \(\gamma ^{\hbox {f}} = 300\,\hbox {N m}^{-1}\) and \(\gamma ^{\hbox {b}} = 300\,\hbox {N m}^{-1}\). We simulate droplets of three different diameters: \(30\,\upmu \hbox {m}\), \(45\,\upmu \hbox {m}\) and \(62\,\upmu \hbox {m}\). The droplets and substrate are discretized with a particle radius of \(0.4\,\upmu \hbox {m}\). This resulted in a total of 1,908,591, 1,972,736 and 2,119,910 particles for the three different droplet diameters, respectively. The computation time required to solve the simulation from the smallest to the largest droplet diameter was about 21, 51 and 53 min, respectively. The large number of particles in the simulation is mostly due to the discretization of the substrate, which may be further optimized in future work to reduce the computation time. The splat shapes of these droplets are shown in Fig. 11. Splat shapes of various initial droplet sizes after \(2\,\upmu \hbox {s}\) of simulated time. From left to right: \(62\,\upmu \hbox {m}\), \(45\,\upmu \hbox {m}\), \(30\,\upmu \hbox {m}\). The black circles denote the measured splat diameter. Best viewed zoomed in on the digital version For all splats, movement has ceased after \(2\,\upmu \hbox {s}\) of simulated time and a splat with splashes has formed, a process which is described in the literature for sufficiently large impact velocities [44].
The only remaining process is the solidification by heat conduction into the substrate. The speed of solidification is well known to be governed by the thermal contact of droplet and substrate. This thermal contact is naturally modeled by our adhesion method, which effectively governs the average distance of particles from the surface of the substrate and the particle density on the surface of the substrate. Higher adhesion values would result in a better thermal contact, while smaller values would inhibit thermal contact, which may be interpreted as describing the surface roughness of the sprayed surface on a macro-scale. Comparison of spread factors of various droplet sizes with maximum spread factors according to Pasandideh-Fard et al. [45] Temperature-dependent material properties of ceramic droplet and substrate Ray-traced rendering of the droplet impact dynamics simulated with SPH The maximum spread factors \(\xi _{\hbox {max}} = \frac{D_{\hbox {max}}}{D_0}\) of the different initial droplet sizes \(D_0\) equal to 30, 45 and \(62\,\upmu \hbox {m}\) calculated by the SPH solver are shown in Fig. 12. These droplet diameters correspond to non-dimensional Reynolds numbers of 431, 646 and 891 for Re = \(\frac{\rho V_0 D_0}{\mu }\), and Weber numbers 5,925, 8,888 and 12,245 for We = \(\frac{\rho {V_0}^2 D_0}{\sigma }\). According to Pasandideh-Fard et al. [45], since the Weber numbers are much larger than the Reynolds numbers, capillary effects can be neglected. Therefore, the maximum spread factor can be approximated as 0.5 Re\(^{0.25}\). The maximal diameter of our simulations \(D_{\hbox {max}}\) was evaluated by hand to be \(60\,\upmu \hbox {m}\), \(100\,\upmu \hbox {m}\) and \(142\,\upmu \hbox {m}\), respectively. The corresponding circles are shown in black in Fig. 11. The results obtained for the maximum spread factor with the SPH method show a good correlation (\(R^2\) = 0.9565) with the analytical results approximated by Pasandideh-Fard et al. [45] as well as with the results predicted by Farrokhpanah et al. [23] (\(R^2\) = 0.9960). For a direct comparison of spread factors between our results and the analytical results see Fig. 12. We conclude that our model is able to reproduce the increase in droplet spread factor that is known to occur with an increase in initial droplet diameter. In the previous section, we compared the results of our SPH simulation against a simulation using the commercial tool Ansys Fluent. While in the real process the partial melting of the droplet and the heat transfer, including the latent heat of melting and solidification, cannot of course be neglected, the main goal in this work was to compare the performance of the fundamentally different numerical methods at the conditions present at the droplet impact in thermal spraying. It is therefore considered justified to initially keep the model as simple as possible, also neglecting most nonlinearities like temperature-dependent material parameters for the comparison. We were able to use identical physical models and parameters for all phenomena, except for the corrective terms employed to improve the accuracy of the SPH simulation. The corrective factors were adjusted to be as small as possible in order to closely match the droplet dimensions computed by Ansys Fluent. From this, we were able to obtain excellent results and good agreement with the Ansys Fluent simulation. In terms of computational efficiency, our proposed SPH method also compares very favorably to Ansys Fluent.
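Before turning to the computational cost in more detail, note that the spread-factor comparison of Fig. 12 can be reproduced from the values quoted above alone. The short script below is purely illustrative; all numbers are taken from the text, and the analytic estimate is the \(0.5\,\hbox {Re}^{0.25}\) approximation of Pasandideh-Fard et al. [45].

D0 = [30e-6, 45e-6, 62e-6]        # initial droplet diameters (m)
D_max = [60e-6, 100e-6, 142e-6]   # maximum splat diameters (m), evaluated by hand above
Re = [431, 646, 891]              # Reynolds numbers quoted in the text

for d0, dmax, re in zip(D0, D_max, Re):
    xi_sim = dmax / d0            # spread factor measured in the SPH simulation
    xi_analytic = 0.5 * re**0.25  # analytic estimate of the maximum spread factor
    print(f"D0 = {d0 * 1e6:.0f} um: xi_sim = {xi_sim:.2f}, xi_analytic = {xi_analytic:.2f}")

The analytic estimates lie slightly above the simulated values; the correlation coefficients quoted above refer to the comparison shown in Fig. 12.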
In our SPH method, we used a total of 230k particles, while Ansys Fluent used a total of 330k mesh cells. While it may seem at first that the discretization using SPH is coarser, the actual discretization density in the region of interest, i.e., in the droplet, was \(\sim 2.8^3 \approx 22\) times higher than the discretization density used by Ansys Fluent, whose mesh edge-length of \(2.25\,\upmu \hbox {m}\) equated to 2.8 times the particle diameter of \(0.8\,\upmu \hbox {m}\). At the same time, our SPH method was able to finish the simulation in roughly 5 min, while Ansys Fluent required roughly 20 min. This is a remarkable result, as it allows a significant refinement in the region of interest, while reducing the simulation time by a factor of 4. All the more interesting are the scaling implications for the simulation of multiple droplet impacts. The increased performance allows for faster iteration times when simulating multiple droplets as well as significantly more accurate resolution of gaps in the coating on the scale of \(< 1\,\upmu \hbox {m}\) than was possible before by Bobzin et al. [17]. While a large part of the SPH code is already well-optimized, there are also some simple optimizations in reach that could further improve the SPH simulation performance. We have also shown in our evaluations that the selection of the volume fraction for contouring of the ceramic phase has a very significant impact on the droplet diameter, but especially the droplet height. This has to do with the dispersion of the liquid surface that is ever-present for FVM simulations using the VOF approach and adds an additional restriction on the mesh resolution. Using our SPH method, we were able to completely avoid the issue of dispersion and obtain a high-resolution simulation with a clearly defined surface. A ray-traced rendering of a surface reconstruction of the SPH particle data is shown in Fig. 14. Finally, we extended our SPH simulations to explicitly discretize the substrate and to take temperature-dependent material parameters into account. Using this setup, we simulated three droplet impacts of different initial diameters and were able to show good agreement of the simulated spread factor when compared with the analytical expression proposed by Pasandideh-Fard et al. [45]. In this article, we have shown that it is possible to simulate a molten droplet impact of the thermal spray process using nearly identical physical parameters in an SPH discretization as well as an FVM (finite volume method) discretization in Ansys Fluent. We were able to perform a quantitative analysis of the simulations by considering droplet height, diameter and velocity distribution over time. All of these showed good agreement, while the few dissimilarities were isolated and explained. We introduced a novel SPH model which uses implicit integration for all forces except surface tension. Because of this, our simulations remain stable for a wide range of large time steps. We were also able to show that our SPH method is a very efficient and accurate alternative to the commercial FVM method of Ansys Fluent. Our SPH method is able to have a higher discretization density in the region of interest while only requiring a quarter of the simulation time. As a next step, we can build upon this work by considering multiple droplets of varying size and velocity. When considering multiple droplets, the gaps in the coating may also be evaluated to a higher degree of accuracy than was previously possible.
A further extension could enable the simulation of multiple, only partially melted droplets of both varying size as well as varying ratio of solid material at the core. Furthermore, the consideration of rough surfaces and resulting contact angles could also be interesting for future work. Davies J (2004) Handbook of thermal spray technology. ASM International, Materials Park Pawlowski L (2008) Science and engineering of thermal, 2nd edn. Wiley, Hoboken Vardelle A, Moreau C, Themelis NJ, Chazelas C (2014) A perspective on plasma spray technology. Plasma Chem Plasma Process 35(3):491–509. https://doi.org/10.1007/s11090-014-9600-y Goutier S, Fauchais P (2011) Last developments in diagnostics to follow splats formation during plasma spraying. J Phys Conf Ser 275:012003. https://doi.org/10.1088/1742-6596/275/1/012003 Vardelle M, Vardelle A, Leger AC, Fauchais P, Gobin D (1995) Influence of particle parameters at impact on splat formation and solidification in plasma spraying processes. J Therm Spray Technol 4(1):50–58. https://doi.org/10.1007/BF02648528 Ghafouri-Azar R, Mostaghimi J, Chandra S, Charmchi M (2003) A stochastic model to simulate the formation of a thermal spray coating. J Therm Spray Technol 12(1):53–69. https://doi.org/10.1361/105996303770348500 Chandra S, Fauchais P (2009) Formation of solid splats during thermal spray deposition. J Therm Spray Technol 18(2):148–180. https://doi.org/10.1007/s11666-009-9294-5 Pasandidehfard M, Pershin V, Chandra S, Mostaghimi J (2002) Splat shapes in a thermal spray coating process: simulations and experiments. J Therm Spray Technol 11:206–217 Yang K, Liu M, Zhou K, Deng C (2012) Recent developments in the research of splat formation process in thermal spraying. J Mater. https://doi.org/10.1155/2013/260758 Bender J (2021) SPlisHSPlasH. https://github.com/InteractiveComputerGraphics/SPlisHSPlasH Borrell R, Lehmkuhl O, Castro J (2013) Parallelization strategy for the volume-of-fluid method on unstructured meshes. Procedia Eng. https://doi.org/10.1016/j.proeng.2013.08.003 Hu H, Argyropoulos S (1996) Mathematical modelling of solidification and melting: a review. Modell Simul Mater Sci Eng 4:371–396 Pasandideh-fard M, Chandra S, Mostaghimi J (2002) A three-dimensional model of droplet impact and solidification. Int J Heat Mass Transf 45:2229–2242 Zheng YZ, Li Q, Zheng ZH, Zhu JF, Cao PL (2014) Modeling the impact, flattening and solidification of a molten droplet on a solid substrate during plasma spraying. Appl Surf Sci 317:526–533. https://doi.org/10.1016/j.apsusc.2014.08.032 Bobzin K, Öte M, Knoch MA, Alkhasli I, Dokhanchi SR (2019) Modelling of particle impact using modified momentum source method in thermal spraying. IOP Conf Ser Mater Sci Eng 480:012003. https://doi.org/10.1088/1757-899x/480/1/012003 Bobzin K, Wietheger W, Heinemann H, Alkhasli I (2021) Simulation of multiple particle impacts in plasma spraying. In: Reisgen U, Drummer D, Marschall H (eds) Enhanced material, parts optimization and process intensification. Springer International Publishing, Cham, pp 91–100 Bobzin K, Wietheger W, Heinemann H, Wolf F (2021) Simulation of thermally sprayed coating properties considering the splat boundaries. IOP Conf Ser Mater Sci Eng 1147:012026. https://doi.org/10.1088/1757-899x/1147/1/012026 Gingold RA, Monaghan J (1977) Smoothed particle hydrodynamics: theory and application to non-spherical stars. Mon Not R Astron Soc 181:375–389. 
https://doi.org/10.1093/mnras/181.3.375 Lucy LB (1977) A numerical approach to the testing of the fission hypothesis. Astron J 82:1013–1024. https://doi.org/10.1086/112164 Fang HS, Bao K, Wei JA, Zhang H, Wu EH, Zheng LL (2009) Simulations of droplet spreading and solidification using an improved SPH model. Numer Heat Transf Part A Appl 55(2):124–143. https://doi.org/10.1080/10407780802603139 Zhang M, Zhang H, Zheng L (2008) Numerical investigation of substrate melting and deformation during thermal spray coating by SPH method. Plasma Chem Plasma Process 29(1):55–68. https://doi.org/10.1007/s11090-008-9158-7 Farrokhpanah A, Bussmann M, Mostaghimi J (2017) New smoothed particle hydrodynamics (SPH) formulation for modeling heat conduction with solidification and melting. Numer Heat Transf Part B Fundam 71(4):299–312. https://doi.org/10.1080/10407790.2017.1293972 Farrokhpanah A, Mostaghimi J, Bussmann M (2021) Nonlinear enthalpy transformation for transient convective phase change in smoothed particle hydrodynamics (SPH). Numer Heat Transf Part B Fundam 79(5–6):255–277. https://doi.org/10.1080/10407790.2021.1929295 Abubakar AA, Arif AFM (2019) A hybrid computational approach for modeling thermal spray deposition. Surf Coat Technol 362:311–327. https://doi.org/10.1016/j.surfcoat.2019.02.010 Zhu Z, Kamnis S, Gu S (2015) Numerical study of molten and semi-molten ceramic impingement by using coupled Eulerian and Lagrangian method. Acta Mater 90:77–87. https://doi.org/10.1016/j.actamat.2015.02.010 Komen H, Shigeta M, Tanaka M (2018) Numerical simulation of molten metal droplet transfer and weld pool convection during gas metal arc welding using incompressible smoothed particle hydrodynamics method. Int J Heat Mass Transf 121:978–985. https://doi.org/10.1016/j.ijheatmasstransfer.2018.01.059 Ito M, Nishio Y, Izawa S, Fukunishi Y, Shigeta M (2015) Numerical simulation of joining process in a TIG welding system using incompressible SPH method. Q J Jpn Weld Soc 33(2):34s–38s. https://doi.org/10.2207/qjjws.33.34s Trautmann M, Hertel M, Füssel U (2018) Numerical simulation of weld pool dynamics using a SPH approach. Weld World 62(5):1013–1020. https://doi.org/10.1007/s40194-018-0615-5 Price DJ (2010) Smoothed particle magnetohydrodynamics—iv. Using the vector potential. Mon Not R Astron Soc 401(3):1475–1499. https://doi.org/10.1111/j.1365-2966.2009.15763.x Koschier D, Bender J, Solenthaler B, Teschner M (2019) Smoothed particle hydrodynamics techniques for the physics based simulation of fluids and solids. In: EUROGRAPHICS 2019 tutorials. Eurographics Association Bender J, Koschier D (2015) Divergence-free smoothed particle hydrodynamics. In: ACM SIGGRAPH/eurographics symposium on computer animation, pp 1–9 Monaghan J (2012) Smoothed particle hydrodynamics and its diverse applications. Annu Rev Fluid Mech 44(1):323–346. https://doi.org/10.1146/annurev-fluid-120710-101220 Weiler M, Koschier D, Brand M, Bender J (2018) A physically consistent implicit viscosity solver for sph fluids. Comput Graph Forum 37(2): 145–155 Brackbill J, Kothe D, Zemach C (1992) A continuum method for modeling surface tension. J Comput Phys 100(2):335–354. https://doi.org/10.1016/0021-9991(92)90240-y Müller M, Charypar D, Gross M (2003) Particle-based fluid simulation for interactive applications. In: ACM SIGGRAPH/eurographics symposium on computer animation, pp 154–159. http://portal.acm.org/citation.cfm?id=846298 Morris JP (2000) Simulating surface tension with smoothed particle hydrodynamics. 
Discrete & Continuous Dynamical Systems - B, February 2005, Volume 5, Issue 1. A special issue on Recent Advances in Vortex Dynamics and Turbulence. Guest Editors: Chjan C. Lim and Ka Kit Tung.
Introduction: Recent advances in vortex dynamics and turbulence Chjan C. Lim and Ka Kit Tung 2005, 5(1): i-i doi: 10.3934/dcdsb.2005.5.1i As the subject of vortex dynamics and its applications to two-dimensional fluid flows matures, we have witnessed an explosion in the number of research works in the field. It is the aim of this special issue to collate some of this recent advance and at the same time point to several new directions. One of these new directions is the re-entry of equilibrium statistical mechanics into the field. Many years after the classical papers of Onsager, Kraichnan, Leith, Montgomery, Lundgren, Pointin and Chorin, we are at a point where the Kraichnan, Batchelor and Leith energy-enstrophy theories in two-dimensional turbulence have been studied from new analytical and numerical points of view. A second emerging direction is the use of a particular type of large-scale scientific computing in vortex statistics, namely Monte-Carlo simulations of vortex gas in the plane and sphere which explore an extended range of parameter values such as temperature and chemical potentials.
Statistical equilibrium of the Coulomb/vortex gas on the unbounded 2-dimensional plane Syed M. Assad and Chjan C. Lim 2005, 5(1): 1-14 doi: 10.3934/dcdsb.2005.5.1 This paper presents the statistical equilibrium distributions of single-species vortex gas and cylindrical electron plasmas on the unbounded plane obtained by Monte Carlo simulations. We present detailed numerical evidence that at high values of $\beta >0$ and $\mu >0$, where $\beta$ is the inverse temperature and $\mu$ is the Lagrange multiplier associated with the conservation of the moment of vorticity, the equilibrium vortex gas distribution is centered about a regular crystalline distribution with very low variance. This equilibrium crystalline structure has the form of several concentric nearly regular polygons within a bounding circle of radius $R$. When $\beta \sim O(1)$, the mean vortex distributions have nearly uniform vortex density inside a circular disk of radius $R$. In all the simulations, the radius $R=\sqrt{\beta \Omega /(2\mu )}$, where $\Omega$ is the total vorticity of the point vortex gas or the number of identical point charges. Using a continuous vorticity density model and assuming that the equilibrium distribution is a uniform one within a bounding circle of radius $R$, we show that the most probable value of $R$ scales with inverse temperature $\beta >0$ and chemical potential $\mu >0$ as $R=\sqrt{\beta \Omega /(2\mu )}$.
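The Assad-Lim abstract above describes canonical Monte Carlo sampling of a single-species point-vortex gas on the plane, with inverse temperature $\beta$ and a multiplier $\mu$ conjugate to the moment of vorticity. The sketch below is not the authors' code; it is a minimal Metropolis sampler under commonly used conventions (pair energy $H = -(1/2\pi)\sum_{i<j}\Gamma_i\Gamma_j \ln|r_i-r_j|$, moment of vorticity $M = \sum_i \Gamma_i |r_i|^2$, sampling weight $\propto e^{-\beta H - \mu M}$), with vortex number, strength, and step size chosen arbitrarily for illustration. It only indicates how the support-radius scaling $R=\sqrt{\beta\Omega/(2\mu)}$ could be checked empirically.

```python
import numpy as np

# Minimal Metropolis sketch for a single-species point-vortex gas on the plane.
# Assumed conventions (not taken from the paper): pair energy
#   H = -(1/(2*pi)) * sum_{i<j} Gamma_i*Gamma_j*ln|r_i - r_j|,
# moment of vorticity M = sum_i Gamma_i*|r_i|^2, sampling weight exp(-beta*H - mu*M).

rng = np.random.default_rng(0)

N = 100            # number of identical vortices (illustrative)
gamma = 1.0        # strength of each vortex
beta, mu = 5.0, 1.0
step = 0.1         # proposal step size
n_sweeps = 2000

pos = rng.normal(scale=1.0, size=(N, 2))  # initial positions

def local_energy(pos, i, r_i):
    """Interaction energy of vortex i placed at r_i with all other vortices."""
    d = np.linalg.norm(pos - r_i, axis=1)
    d = np.delete(d, i)                   # exclude the (stored) position of vortex i itself
    return -(gamma * gamma / (2.0 * np.pi)) * np.sum(np.log(d))

for sweep in range(n_sweeps):
    for i in range(N):
        old = pos[i].copy()
        new = old + rng.normal(scale=step, size=2)
        dH = local_energy(pos, i, new) - local_energy(pos, i, old)
        dM = gamma * (new @ new - old @ old)
        if rng.random() < np.exp(-(beta * dH + mu * dM)):
            pos[i] = new

omega_tot = N * gamma                     # total vorticity Omega
r = np.linalg.norm(pos, axis=1)
print("largest vortex radius   :", r.max())
print("predicted support radius:", np.sqrt(beta * omega_tot / (2.0 * mu)))
```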
A generalized Poincaré-Birkhoff theorem with applications to coaxial vortex ring motion Denis Blackmore, Jyoti Champanerkar and Chengwen Wang 2005, 5(1): 15-33 doi: 10.3934/dcdsb.2005.5.15 A new generalization of the Poincaré-Birkhoff fixed point theorem applying to small perturbations of finite-dimensional, completely integrable Hamiltonian systems is formulated and proved. The motivation for this theorem is an extension of some recent results of Blackmore and Knio on the dynamics of three coaxial vortex rings in an ideal fluid. In particular, it is proved using KAM theory and this new fixed point theorem that if $n>3$ coaxial rings all having vortex strengths of the same sign are initially in certain positions sufficiently close to one another in a three-dimensional ideal fluid environment, their motion with respect to the center of vorticity exhibits invariant $(n-1)$-dimensional tori comprised of quasiperiodic orbits together with interspersed periodic trajectories.
Dynamics of a circular cylinder interacting with point vortices A. V. Borisov, I. S. Mamaev and S. M. Ramodanov 2005, 5(1): 35-50 doi: 10.3934/dcdsb.2005.5.35 The paper studies the system of a rigid body interacting dynamically with point vortices in a perfect fluid. For arbitrary values of the vortex strengths and the circulation around the cylinder the system is shown to be Hamiltonian (the corresponding Poisson bracket structure is rather complicated). We also reduce the number of degrees of freedom of the system by two using the reduction by symmetry technique and perform a thorough qualitative analysis of the integrable system of a cylinder interacting with one vortex.
Reversible Hamiltonian Liapunov center theorem Claudio A. Buzzi and Jeroen S.W. Lamb 2005, 5(1): 51-66 doi: 10.3934/dcdsb.2005.5.51 We study the existence of periodic solutions in the neighbourhood of symmetric (partially) elliptic equilibria in purely reversible Hamiltonian vector fields. These are Hamiltonian vector fields with an involutory reversing symmetry $R$. We contrast the cases where $R$ acts symplectically and anti-symplectically. In case $R$ acts anti-symplectically, generically purely imaginary eigenvalues are isolated, and the equilibrium is contained in a local two-dimensional invariant manifold containing symmetric periodic solutions encircling the equilibrium point. In case $R$ acts symplectically, generically purely imaginary eigenvalues are doubly degenerate, and the equilibrium is contained in two two-dimensional invariant manifolds containing nonsymmetric periodic solutions encircling the equilibrium point. In addition, there exists a three-dimensional invariant surface containing a two-parameter family of symmetric periodic solutions.
Non-universal features of forced 2D turbulence in the energy and enstrophy ranges S. Danilov 2005, 5(1): 67-78 doi: 10.3934/dcdsb.2005.5.67 Analysis of energy spectra and fluxes of 2D forced incompressible turbulence in the energy range reveals marked departures from the $-5/3$ law and the idea of spectral locality. Departures from locality could also be diagnosed in the enstrophy interval, and in the energy range of beta-plane turbulence.
On the double cascades of energy and enstrophy in two dimensional turbulence. Part 1. Theoretical formulation Eleftherios Gkioulekas and Ka Kit Tung 2005, 5(1): 79-102 doi: 10.3934/dcdsb.2005.5.79 The Kraichnan-Leith-Batchelor scenario of a dual cascade, consisting of an upscale pure energy cascade and a downscale pure enstrophy cascade, is an idealization valid only in an infinite domain in the limit of infinite Reynolds number. In realistic situations there are double cascades of energy and enstrophy located both upscale and downscale of injection, as long as there are cascades. We outline the statistical theory governing the double cascades and predict the form of the energy spectrum. We show that in general the twin conservation of energy and enstrophy implies the presence of two constant fluxes in each inertial range. This gives rise to a more complicated energy spectrum, which cannot be predicted using dimensional arguments as in the classical theory.
On the double cascades of energy and enstrophy in two dimensional turbulence. Part 2. Approach to the KLB limit and interpretation of experimental evidence Eleftherios Gkioulekas and Ka Kit Tung 2005, 5(1): 103-124 doi: 10.3934/dcdsb.2005.5.103 This paper is concerned with three interrelated issues in our proposal of double cascades, intended to serve as a more realistic theory of two-dimensional turbulence. We begin by examining the approach to the KLB limit. We present improved proofs of the result by Fjortoft. We also explain why in that limit the subleading downscale energy cascade and upscale enstrophy cascade are hidden in the energy spectrum. Then we review the experimental evidence from numerical simulations concerning the realizability of the energy and enstrophy cascades. The inverse energy cascade is found to be affected by the presence of a particular solution, and the downscale enstrophy cascade is not robust. In particular, while it is possible to have either the upscale range or the downscale range with a suitable choice of dissipations, the dual cascade of KLB does not appear to be realizable, not even approximately. Finally, we amplify the hypothesis that the energy spectrum of the atmosphere reflects a combined downscale cascade of energy and enstrophy. The possibility of a downscale helicity cascade is also considered.
The Dirichlet quotient of point vortex interactions on the surface of the sphere examined by Monte Carlo experiments Joseph Nebus 2005, 5(1): 125-136 doi: 10.3934/dcdsb.2005.5.125 The point-vortex system on the surface of the sphere is examined by Monte Carlo methods. The statistical equilibria found in the system when it is constrained to keep circulation zero (but without other explicit constraints on site values) are found to be self-regulating in a sense. While site strengths will grow without bound as the number of sweeps increases, the Dirichlet quotient, the ratio of enstrophy to energy, is found to converge rapidly to a finite nonzero value. This unlimited growth in site values remains controlled. The dependences of this quotient on the temperature and on the mesh size are examined.
The constrained planar N-vortex problem: I. Integrability P.K. Newton, M. Ruith and E. Upchurch 2005, 5(1): 137-152 doi: 10.3934/dcdsb.2005.5.137 The Hamiltonian system governing $N$ interacting particles constrained to lie on a closed planar curve is derived. The problem is formulated in detail for the case of logarithmic (point-vortex) interactions. We show that when the curve is circular with radius $R$, the system is completely integrable for all particle strengths $\Gamma_\beta$, with particle $\Gamma_\beta$ moving with frequency $\omega_\beta = (\Gamma - \Gamma_\beta)/4\pi R^2$, where $\Gamma = \sum^{N}_{\alpha=1} \Gamma_\alpha$ is the sum of the strengths of all the particles. When all the particles have equal strength, they move periodically around the circle keeping their relative distances fixed. When not all the strengths are equal, two or more of the particles collide in finite time. The diffusion of a neutral particle (i.e. the problem of 1D mixing) is examined. On a circular curve, a neutral particle moves uniformly with frequency $\Gamma / 4\pi R^2$. When the curve is not perfectly circular, for example when given a sinusoidal perturbation, or when the particles move on concentric circles with different radii, the particle dynamics is considerably more complex, as shown numerically from an examination of power spectra and collision diagrams. Thus, the circular constraint appears to be special in that it induces completely integrable dynamics.
Theory and simulation of real and ideal magnetohydrodynamic turbulence John V. Shebalin 2005, 5(1): 153-174 doi: 10.3934/dcdsb.2005.5.153 Incompressible, homogeneous magnetohydrodynamic (MHD) turbulence consists of fluctuating vorticity and magnetic fields, which are represented in terms of their Fourier coefficients. Here, a set of five Fourier spectral transform method numerical simulations of two-dimensional (2-D) MHD turbulence on a $512^2$ grid is described. Each simulation is a numerically realized dynamical system consisting of Fourier modes associated with wave vectors $\mathbf{k}$, with integer components, such that $k = |\mathbf{k}| \le k_{max}$. The simulation set consists of one ideal (non-dissipative) case and four real (dissipative) cases. All five runs had equivalent initial conditions. The dimensions of the dynamical systems associated with these cases are the numbers of independent real and imaginary parts of the Fourier modes. The ideal simulation has a dimension of $366104$, while each real simulation has a dimension of $411712$. The real runs vary in magnetic Prandtl number $P_M$, with $P_M \in \{0.1, 0.25, 1, 4\}$. In the results presented here, all runs have been taken to a simulation time of $t = 25$. Although ideal and real Fourier spectra are quite different at high $k$, they are similar at low values of $k$. Their low $k$ behavior indicates the existence of broken symmetry and coherent structure in real MHD turbulence, similar to what exists in ideal MHD turbulence. The value of $P_M$ strongly affects the ratio of kinetic to magnetic energy and energy dissipation (which is mostly ohmic). The relevance of these results to 3-D Navier-Stokes and MHD turbulence is discussed.
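Several of the abstracts above (Danilov; Gkioulekas and Tung) revolve around energy and enstrophy spectra of two-dimensional turbulence. As a point of reference, the following is a small, self-contained sketch, unrelated to the specific codes used in those papers, of how shell-averaged energy and enstrophy spectra can be computed from a doubly periodic vorticity field with an FFT; the field used here is random noise and purely illustrative.

```python
import numpy as np

# Shell-averaged energy and enstrophy spectra of a doubly periodic 2D vorticity
# field. Illustrative only; the field below is random, not a turbulence simulation.

N = 256
rng = np.random.default_rng(1)
omega = rng.standard_normal((N, N))        # vorticity on a 2*pi x 2*pi periodic box

omega_hat = np.fft.fft2(omega) / N**2      # normalized Fourier coefficients
k = np.fft.fftfreq(N, d=1.0 / N)           # integer wavenumbers
KX, KY = np.meshgrid(k, k, indexing="ij")
k2 = KX**2 + KY**2
k2[0, 0] = 1.0                             # avoid division by zero at k = 0

enstrophy_density = 0.5 * np.abs(omega_hat) ** 2      # per-mode enstrophy
energy_density = enstrophy_density / k2               # per-mode energy: E = Z / k^2 in 2D
energy_density[0, 0] = 0.0

k_mag = np.sqrt(KX**2 + KY**2)
edges = np.arange(0.5, N // 2)             # shell edges k-0.5 .. k+0.5
E_k = np.zeros(len(edges) - 1)
Z_k = np.zeros(len(edges) - 1)
for i in range(len(edges) - 1):
    mask = (k_mag >= edges[i]) & (k_mag < edges[i + 1])
    E_k[i] = energy_density[mask].sum()
    Z_k[i] = enstrophy_density[mask].sum()

print("energy in resolved shells   :", E_k.sum())
print("enstrophy in resolved shells:", Z_k.sum())
# In a KLB-type enstrophy range one would look for E(k) ~ k^-3 (with corrections);
# in an inverse energy range, E(k) ~ k^-5/3.
```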
Subdiffuse scattering and absorption model for single fiber reflectance spectroscopy
Anouk L. Post, Dirk J. Faber, Henricus J. C. M. Sterenborg, and Ton G. van Leeuwen
Biomed. Opt. Express 11(11), 6620-6633 (2020). https://doi.org/10.1364/BOE.402466
Single fiber reflectance (SFR) spectroscopy is a technique that is sensitive to small-scale changes in tissue. An additional benefit is that SFR measurements can be performed through endoscopes or biopsy needles. In SFR spectroscopy, a single fiber emits and collects light. Tissue optical properties can be extracted from SFR spectra and related to the disease state of tissue. However, the model currently used to extract optical properties was derived for tissues with modified Henyey-Greenstein phase functions only and is inadequate for other tissue phase functions. Here, we will present a model for SFR spectroscopy that provides accurate results for a large range of tissue phase functions, reduced scattering coefficients, and absorption coefficients. Our model predicts the reflectance with a median error of 5.6% compared to 19.3% for the currently used model. For two simulated tissue spectra, our model fit provides accurate results.
Reflectance spectroscopy techniques can provide rapid information related to the structural and biochemical composition of tissue. Single Fiber Reflectance (SFR) spectroscopy is a technique where light is emitted and collected through a single fiber, which is connected to a broadband light source and a spectrograph using a bifurcated fiber or a beam splitter. SFR measures the steady-state reflectance versus wavelength. Due to its small footprint, SFR measurements can be performed through endoscopes or biopsy needles [1–3]. SFR spectroscopy has been studied for medical diagnostics mainly in the field of oncology [2–8], but also in e.g. saturation monitoring [9] and orthopedics [1,10].
SFR spectroscopy is especially suitable to detect small-scale changes in tissue, due to its relatively small sampling volume, ∼(100 µm)3, and sensitivity to the tissue phase function. Since light is emitted and collected through the same fiber, photon path lengths are biased towards short pathlengths and diffusion theory alone cannot describe the measured reflectance. SFR spectroscopy is a subdiffuse technique and, therefore, measurements are sensitive to the tissue phase function [11–13], which is related to the nanoscale architecture of tissue. When light propagation is described as a random walk, the phase function (p[θ]) describes the probability of a photon scattering at an angle θ relative to the photon's previous trajectory for each scattering event. With an appropriate model, optical properties can be extracted from SFR spectra, which can be related to the disease state of tissue. Until now, only a single model was available for SFR spectroscopy to relate the measured reflectance to scattering and absorption properties of tissue, which was derived by Kanick et al. [14,15]. The model of Kanick et al. was derived using Monte Carlo (MC) simulations with modified Henyey Greenstein (MHG) phase functions. Unfortunately, even though the MHG phase function describes the phase function of skin tissue [16], for many other tissues other phase functions have been measured, e.g. the two-term Henyey-Greenstein (TTHG) for liver [17,18], uterus [17], brain [19], breast [20] and muscle [21] and the Reynolds McCormick phase function (RMC, also known as Gegenbauer) for blood [22,23]. In practice, the phase function of a specific tissue under investigation is generally not known. Therefore, a model that is valid for the wide range of tissue phase functions is essential. Recently, we took the first steps towards the development of a new model for the SFR reflectance that provides accurate results for the wide range of tissue phase functions that can be encountered. We developed a new parameter (psb) to capture the phase function influence on the measured reflectance and we developed a model for the reflectance as a function of tissue scattering properties in the absence of absorption [24]. However, due to the presence of absorbers in tissue such as blood, fat, and water, it is essential that absorption is included in the model to enable accurate extraction of optical properties from tissue measurements. In this paper, we will develop a comprehensive model for the SFR reflectance as a function of both scattering and absorption properties of tissue. Furthermore, we will validate our model based on MC simulations for the wide range of reduced scattering coefficients, absorption coefficients, and phase functions that can be encountered in tissue. We will demonstrate that our comprehensive model for SFR spectroscopy predicts the measured reflectance substantially better than the model from Kanick et al. [14,15]. Finally, we will demonstrate the use of our model to determine optical properties from SFR measurements by simulating spectra of two tissue types. 2.1 SFR model Diffusion theory can accurately describe the reflectance when photon path lengths are larger than several transport mean free paths, where the transport mean free path is the inverse of the reduced scattering coefficient: 1/µs' and the reduced scattering coefficient equals µs'=µs(1-g1), where µs is the scattering coefficient and g1 is the scattering anisotropy (the first Legendre moment of the phase function). 
In SFR spectroscopy, photon path lengths are generally less than one transport mean free path since light is emitted and collected through the same fiber. In this so-called subdiffuse regime, the measured reflectance is the sum of photons undergoing a large number of scattering events (diffuse photons) and photons that undergo only a few scattering events (semiballistic photons). Therefore, we model the reflectance as the sum of a semiballistic reflectance (RSFR,sb) and a diffuse reflectance (RSFR,dif):
(1) $$R_{SFR} = R_{SFR,sb} + R_{SFR,dif} = (1 + X)\cdot R_{SFR,dif}$$
where X is the ratio between the semiballistic and diffuse reflectance:
(2) $$X = \frac{R_{SFR,sb}}{R_{SFR,dif}}$$
Semiballistic photons are defined here as detected photons that underwent a single backscattering event in combination with an arbitrary number of forward scattering events [25]. Diffuse photons undergo many scattering events before they are detected and, therefore, their direction is randomized. Thus, the diffuse reflectance does not depend on the details of the tissue phase function, but only on the scattering anisotropy g1. Semiballistic photons undergo only a few scattering events and, therefore, their direction is not fully randomized. The semiballistic contribution to the reflectance is thus more sensitive to the details of the phase function. To capture the influence of the phase function on the measured reflectance, we incorporate the previously derived phase function parameter psb [24] into our model.
2.1.1. Diffuse reflectance RSFR,dif
For the contribution of diffuse photons to the reflectance, we make use of the diffusion approximation to the radiative transport equation, which describes the reflectance versus radial distance, R(ρ), for a pencil beam illumination. The implementation of the radiative transport equation for an overlapping source and detection fiber was derived by Faber et al., who model the diffuse reflectance as a single integral over the fiber diameter [26], where the diffuse reflectance versus radial distance is integrated over the probability density function of distances over the fiber face p(ρ):
(3) $$R_{dif} = \frac{\pi}{4}\, d^2 \int_0^d R(\rho)\, p(\rho,d)\, d\rho$$
(4) $$p(\rho,d) = \frac{16\rho}{\pi d^2}\cos^{-1}\!\left(\frac{\rho}{d}\right) - \frac{16}{\pi d}\left(\frac{\rho}{d}\right)^2\sqrt{1-\left(\frac{\rho}{d}\right)^2}$$
Equation (4) describes the distribution of distances between two randomly placed points on a disk with diameter d, which is a classic problem in the field of geometric probability [26,27]. For R(ρ) we will use the diffuse reflectance versus radial distance as a function of the reduced scattering coefficient and the absorption coefficient for a pencil beam illumination, using the extended boundary condition as proposed by Farrell et al. [28]:
(5) $$R(\rho,\mu_s',\mu_a) = \frac{a'}{4\pi}\left[z_0\left(\mu_{eff} + \frac{1}{r_1}\right)\frac{e^{-\mu_{eff} r_1}}{r_1^2} + (z_0 + 2 z_b)\left(\mu_{eff} + \frac{1}{r_2}\right)\frac{e^{-\mu_{eff} r_2}}{r_2^2}\right]$$
where a' = µs'/(µs'+µa); z0 = 1/µs'; µeff = √(3µaµs'); r1 = √(z0²+ρ²); r2 = √((z0+2zb)²+ρ²); zb = 2A/(3µs'); and A is a parameter that depends on the refractive index mismatch between the fiber and the tissue [29].
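A minimal numerical sketch of Eqs. (3)–(5) is given below; it is not the authors' implementation. It evaluates Farrell's diffuse reflectance R(ρ) for a pencil beam, weights it with the distance density p(ρ, d) of Eq. (4), and integrates over the fiber face. The boundary parameter A is an assumption here (it depends on the refractive-index mismatch, see [29]); A = 1, corresponding to a matched boundary, is used purely as a placeholder, and the collection efficiency uses the expression given below in Eq. (6).

```python
import numpy as np

def farrell_R(rho, mus_r, mua, A=1.0):
    """Diffuse reflectance vs radial distance for a pencil beam, Eq. (5).
    mus_r: reduced scattering coefficient, mua: absorption coefficient
    (same inverse length unit as rho). A is a boundary-mismatch placeholder."""
    a_prime = mus_r / (mus_r + mua)
    z0 = 1.0 / mus_r
    mu_eff = np.sqrt(3.0 * mua * mus_r)
    zb = 2.0 * A / (3.0 * mus_r)
    r1 = np.sqrt(z0**2 + rho**2)
    r2 = np.sqrt((z0 + 2.0 * zb)**2 + rho**2)
    return (a_prime / (4.0 * np.pi)) * (
        z0 * (mu_eff + 1.0 / r1) * np.exp(-mu_eff * r1) / r1**2
        + (z0 + 2.0 * zb) * (mu_eff + 1.0 / r2) * np.exp(-mu_eff * r2) / r2**2
    )

def p_rho(rho, d):
    """Probability density of distances between two random points on a disk, Eq. (4)."""
    x = rho / d
    return (16.0 * x / (np.pi * d)) * np.arccos(x) - (16.0 / (np.pi * d)) * x**2 * np.sqrt(1.0 - x**2)

def R_dif(mus_r, mua, d, A=1.0, n_steps=2000):
    """Diffuse reflectance collected over the fiber face, Eq. (3), by the trapezoidal rule."""
    rho = np.linspace(1e-9, d * (1.0 - 1e-9), n_steps)   # avoid the endpoints rho = 0 and rho = d
    integrand = farrell_R(rho, mus_r, mua, A) * p_rho(rho, d)
    return (np.pi / 4.0) * d**2 * np.trapz(integrand, rho)

# Example operating point (units: cm and cm^-1, values illustrative only)
d, mus_r, mua = 0.01, 100.0, 10.0
eta_c = (0.22 / 1.35) ** 2 * 1.11    # collection efficiency, Eq. (6) below, NA = 0.22, n = 1.35
print("R_dif    :", R_dif(mus_r, mua, d))
print("R_SFR,dif:", eta_c * R_dif(mus_r, mua, d))
```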
The diffuse contribution to the reflectance Rdif can be rewritten as a function of both µs'd and µa/µs' (the full derivation is provided in the Supplemental Materials). Since the diffuse contribution to the reflectance, Rdif, is a function of both µs'd and µa/µs', it does not depend on µs', µa and d separately, but on µs'd and µad. This is demonstrated in Fig. 1, with calculations of Rdif using Eqs. (3)–(5) for different values of µs', µa and d, while holding µs'd and µad constant. Not all diffuse photons that reach the fiber face will be transported through the fiber to the detector. Only photons that arrive at an angle smaller than or equal to the acceptance angle of the fiber (θacc) will be detected, where θacc = asin(NA/n), NA is the fiber numerical aperture and n is the tissue refractive index. The fraction of photons arriving at the fiber face that is detected is described by the collection efficiency ηc [30]. Thus, the diffuse reflectance (RSFR,dif) equals the collection efficiency of the fiber (ηc) times the fraction of diffuse photons that reach the fiber face (Rdif): (6)$${R_{SFR,dif}} = {\eta _c} \cdot {R_{dif}}$$ We have previously shown that the collection efficiency for SFR spectroscopy equals ηc = (NA/n)2·1.11 [24]. Fig. 1. Diffuse reflectance values (Rdif) calculated using Eqs. (3)–(5), for different values of µs', µa, and d. The diffuse reflectance is a function of µs'd and µa/µs'. 2.1.2. Ratio semiballistic to diffuse photons X In Eq. (1), X is the ratio between the semiballistic and diffuse reflectance. Since semiballistic photons undergo only a few scattering events before detection, the semiballistic reflectance is sensitive to the tissue phase function p(θ). Therefore, the influence of the phase function on the semiballistic reflectance needs to be captured in the formula for X. Previously, we introduced the parameter psb to model the semiballistic contribution to the reflectance [24]: (7)$${p_{sb}} = \frac{{{p_b}\left( {1^\circ } \right)}}{{1 - {p_f}\left( {23^\circ } \right)}}$$ where pb(1°) is the probability of a photon undergoing a scattering event between 0 and 1 degrees in the backward direction. This probability equals the integral over the phase function over 1 degree in the backward direction: (8)$${p_b}({1^\circ } )= 2\mathrm{\pi}\mathop \smallint \nolimits_{ - 1^\circ }^0 p(\theta )sin\theta d\theta $$ pf(23°) is the probability of a photon undergoing a scattering event between 0 and 23 degrees in the forward direction: (9)$${p_f}\left( {23^\circ } \right) = 2{\rm{\pi }}\mathop \smallint \nolimits_0^{23^\circ } p\left( \theta \right)sin\theta d\theta $$ In short, the parameter psb describes the detection probability of photons that undergo a single backscatter event and multiple forward scattering events (photons that we defined as semiballistic). The integration angles were based on an analysis of MC simulations employing a large range of phase functions. To ensure the simulated reflectance was in the semiballistic regime, we had used a low reduced scattering coefficient (µs'd = 0.1) and no absorption in the MC simulations. The full derivation of psb and the choice of integration angles can be found in [24]. 
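To make Eqs. (7)–(9) concrete, the snippet below computes psb by numerical quadrature for a plain Henyey-Greenstein phase function. This is an illustration rather than the authors' code, and it assumes the conventional reading that "backward" means scattering angles within 1° of 180° and "forward" within 23° of 0°, with the phase function normalized so that 2π∫p(θ)sinθ dθ = 1 over 0–180°.

```python
import numpy as np
from scipy.integrate import quad

def hg(theta, g):
    """Henyey-Greenstein phase function, normalized so that
    2*pi * integral_0^pi p(theta) sin(theta) dtheta = 1."""
    return (1.0 - g**2) / (4.0 * np.pi * (1.0 + g**2 - 2.0 * g * np.cos(theta)) ** 1.5)

def psb(phase, deg_back=1.0, deg_fwd=23.0):
    """p_sb = p_b(1 deg) / (1 - p_f(23 deg)), Eqs. (7)-(9).
    'Backward' is taken as scattering angles within deg_back degrees of 180."""
    integrand = lambda th: 2.0 * np.pi * phase(th) * np.sin(th)
    p_b, _ = quad(integrand, np.pi - np.radians(deg_back), np.pi)
    p_f, _ = quad(integrand, 0.0, np.radians(deg_fwd))
    return p_b / (1.0 - p_f)

for g in (0.7, 0.8, 0.9):
    print(f"g1 = {g:.1f}  psb = {psb(lambda th: hg(th, g)):.2e}")
```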
To derive a model for the ratio X in the presence of absorption, we start from the observation that the reflectance can be written as the product of the reflectance in the absence of absorption (R0) and the integral of the photon path length distribution (p(l)) weighted by the Beer-Lambert law: (10)$$R\left( {{\mu _a}} \right) = {R_0} \cdot \mathop \smallint \nolimits_0^\infty p\left( l \right){e^{ - {\mu _a}l}}dl$$ Equation (10) takes the form of a Laplace transform, with the absorption coefficient and path length as conjugate variables. In diffusion theory, the path length distribution p(l) depends on µs'l only. Using the scaling properties of the Laplace transform, this implies that the diffuse reflectance, Rdif(µa), will depend on the ratio µa/µs'. Since path lengths of semi-ballistic photons are much shorter than path lengths of diffuse photons, we assume that absorption will mainly influence the diffuse contribution to the reflectance. Therefore, we multiply the numerator of X in the absence of absorption (X0, as derived in [24] for an NA of 0.22) by a function that includes the term µa/µs'. (11)$$X = {X_0} \cdot f\left( {\frac{{{\mu _a}}}{{\mu _s^{\prime}}}} \right) = 3046{\left( {\frac{{{p_{sb}}}}{{{{\left( {\mu _s^{\prime}d} \right)}^2}}}} \right)^{0.748}} \cdot f\left( {\frac{{{\mu _a}}}{{\mu _s^{\prime}}}} \right)$$ 2.2 Monte Carlo simulations We will develop a model for f(µa/µs') based on MC simulations. Photons were launched at locations based on a uniform distribution across the fiber with an angle from a uniform angular distribution within the acceptance angle of the fiber θacc, where θacc = arcsin(NA/n). Photons were detected if they reached the fiber face at an angle within θacc. For all MC simulations, the NA was 0.22 and the refractive index was 1.35 for the tissue, 1.45 for the fiber face, and 1.00 for the medium above the tissue. We ran each simulation three times and had chosen the number of launched photons such that the standard deviation over the mean of the reflectance for each set of three simulations was less than approximately 2%. To derive the model for f(µa/µs') in Eq. (11) of our model, we performed simulations with 10 different phase functions (Table 1), chosen such that they cover a wide range of psb values, g1 values, and phase function types. With these phase functions, we performed simulations for a fiber diameter of 0.01 cm, with 20 values of µs' from 10 to 10000 cm−1 (equally spaced in 20 steps on a logarithmic scale) and 68 values of µa between 0.1 and 500 cm−1 (0.1 to 1 in steps of 0.1, 1 to 10 in steps of 1, 10 to 500 in steps of 10). Table 1. 10 phase functions used to derive the model. Three types of phase functions were used: Reynolds McCormick (RMC), two-term Henyey Greenstein (TTHG), and modified Henyey Greenstein (MHG). Per phase function, psb, and g1 values are given, as well as the parameters employed in the phase functions (α, gR, gf, gb, and gHG). To determine the accuracy of our model for a wide range of phase functions, we performed additional simulations using the following 207 phase functions: 15 modified Henyey Greenstein (MHG), 146 two-term Henyey-Greenstein (TTHG), and 46 Reynolds McCormick phase functions (RMC), employing the parameters specified in Table 2 and applying the restrictions g1≥0.5 and g2<0.9 to exclude biologically unreasonable phase functions. These restrictions resulted in phase functions with fairly equally distributed g1 values between 0.5 and 0.94. 
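Putting Eqs. (1), (6), and (11) together, a hedged sketch of the semiballistic correction is shown below; it is not the authors' code. It anticipates the exponential form of f(µa/µs') that the text arrives at in Eq. (12) further on, and the coefficients b1 and b2 are the fitted values reported in Table 3 (not reproduced in the text), so they are left as user-supplied placeholders. The X0 prefactor assumes an NA of 0.22, as stated for Eq. (11), and the diffuse contribution RSFR,dif is taken as an input, for instance from the Eq. (3)–(6) sketch given earlier.

```python
import numpy as np

def semiballistic_factor(mus_r, mua, d, p_sb, b1, b2):
    """Return (1 + X) of Eq. (1), with X = X0 * f(mua/mus') from Eqs. (11)-(12).
    The X0 prefactor assumes NA = 0.22; b1, b2 are the fitted coefficients of
    Table 3 (values not given in the text) and must be supplied by the user."""
    X0 = 3046.0 * (p_sb / (mus_r * d) ** 2) ** 0.748   # ratio without absorption, Eq. (11)
    f = np.exp(b1 * (mua / mus_r) ** b2)               # absorption correction, Eq. (12)
    return 1.0 + X0 * f

def R_SFR(R_sfr_dif, mus_r, mua, d, p_sb, b1, b2):
    """Total single fiber reflectance, Eq. (1): R_SFR = (1 + X) * R_SFR,dif.
    R_sfr_dif is the diffuse contribution eta_c * R_dif."""
    return semiballistic_factor(mus_r, mua, d, p_sb, b1, b2) * R_sfr_dif

# Illustrative use (all numbers are arbitrary placeholders; b1, b2 come from Table 3):
# R_total = R_SFR(R_sfr_dif=0.002, mus_r=50.0, mua=5.0, d=0.05, p_sb=1e-4, b1=1.0, b2=0.5)
```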
We performed simulations for a fiber diameter of 0.01 cm and all the combinations of µa = [1,5,10,30, 100] cm−1, µs' = [1,10, 50, 100] cm−1, as well as for a fiber diameter of 0.1 cm and all the combinations of µa = [1,5,10,30] cm−1, µs' = [1,10, 50, 100] cm−1, resulting in 36 sets of simulations, with 207 phase functions each. We tested the accuracy of our model for the combination of the set of 10 phase functions and the set with 207 phase functions (21052 simulations in total). We compared this to the accuracy obtained with the currently used model of Kanick et al. [14,15]. Table 2. Parameters employed in the selection of the 207 phase function used to determine the accuracy of our model. To demonstrate the use of our model to determine optical properties from SFR spectra, we performed MC simulations with optical properties that can be encountered in tissue and performed a fit on these spectra using our model. We modeled spectra of two tissues, from 400 to 900 nm in steps of 5 nm. We based the reduced scattering coefficients and absorption coefficients on the review by Jacques [31], which are summarized in Table 4. The first tissue was simulated to resemble skin, with µs' = 46(λ/500)−1.421 cm−1, a blood volume fraction of 0.01, an oxygen saturation level of 98%, and an MHG phase function where g1 increases linearly over the wavelengths from 0.7 to 0.77. The second tissue that we simulated resembles soft tissue, with µs' = 18.9(λ/500)−1.3 cm−1, a blood volume fraction of 0.05, an oxygen saturation level of 98%, and a TTHG with α = 0.95, gb = -0.05, and gf linearly increasing over the wavelengths from 0.85 to 0.9. For robust fit results, the number of fit parameters should be substantially smaller than the number of data points. Therefore, we performed MC simulations of spectra for two different fiber diameters. This implementation of SFR is referred to as multi-diameter SFR (MD-SFR) [3]. We performed simulations with fiber diameters of 300 and 600 microns. For a single SFR measurement, the number of data points (reflectance values) equals the number of wavelengths, i. Clearly, separate values for µs', µa, and psb per wavelength cannot be determined directly using a fit on a single SFR spectrum since the number of fit parameters would equal 3i. Therefore, we reduce the number of fit parameters by modeling the reduced scattering coefficient as µs' = a· (λ/500)−b, where a is the scattering amplitude and b the scattering slope. The absorption coefficient is modeled as a sum of absorption spectra of different absorbers present in the tissue times the concentration of these absorbers: µa,tissue = Σcj·µa,j(λ), e.g. for tissue with blood and water: µa,tissue = cblood·µa,blood(λ)+ cwater·µa,water (λ). Since the wavelength-dependence of psb is unknown, we fit a value of psb for each wavelength within the spectrum. For a single measured spectrum, the number of data points equals i and the number of fit parameters equals 2+i + j (2 for µs', i for psb, and j for the number of absorbers, respectively). Such an under-determined system will not provide robust fit results. The use of two separate measurements with two fibers of different diameters overcomes this issue. In that case, the number of data points will equal 2i­, while the number of fit parameters remains 2+i + j. Figure 2 shows the simulated reflectance as a function of µs'd for three different values of psb (colored lines) and two different values of µad. 
The dashed black line indicates the diffuse reflectance (RSFR,dif) for the corresponding µs'd and µad values. For high values of µs'd the reflectance equals the diffuse reflectance (dashed line). For lower values of µs'd there is an additional semiballistic contribution to the reflectance. With increasing psb and µad the fraction of semiballistic photons increases. Fig. 2. Reflectance vs. µs'd for three different phase functions and (a) µad = 0.1, (b) µad = 1.0. For lower values of µs'd, an additional semiballistic reflectance is added to the diffuse reflectance (RSFR,dif), which depends on psb, µs'd, and µad. The diffuse contribution to the reflectance depends on µs'd and µad only. To investigate whether the semiballistic contribution to the reflectance can be modeled as a function of µs'd, µad and psb, we compared the semiballistic reflectance for simulations with different values of µs' and µa, but the same values of µs'd, µad for all 207 phase functions (Table 2). Here, the semiballistic reflectance is the simulated reflectance minus RSFR,dif. Figure 3 shows the results for µs'd = 1.0 and µad = 1.0, as well as for µs'd = 0.1 and µad = 0.1 – using two different fiber diameters of 0.01 and 0.1 cm. The semiballistic reflectance values for the same µs'd, µad, and phase functions nearly overlap. Therefore, we will model the semiballistic contribution to the reflectance using µs'd, µad, and psb. The small differences in the reflectance can be explained by the fact that there is an uncertainty in our measured reflectance because we performed MC simulations such that the standard deviation over the mean of 3 simulations was less than 2%. It can be seen that for higher values of psb, there is some variation in the reflectance for the same psb values. This variation will likely result in a less accurate model for higher values of psb. Fig. 3. The semiballistic reflectance depends on µs'd µad and psb. Here, the semiballistic reflectance equals the simulated reflectance minus RSFR,dif. For visualization purposes, simulation results for every 3rd psb value are depicted here. Based on our simulations we searched for a model for f(µa/µs') based on visual inspection of the data. Figure 4 shows an example of the visualization of two sets of simulations, with µs'd = 0.1 and 0.144, with µad values of 0.001 to 1 and an MHG phase function. Based on the visual inspection we arrived at the following model: (12)$$f\left( {\frac{{{\mu _a}}}{{\mu _s^{\prime}}}} \right) = {e^{{b_1} \cdot {{\left( {\frac{{{\mu _a}}}{{\mu _s^{\prime}}}} \right)}^{{b_2}}}}}$$ Fig. 4. We searched for a model for f(µa/µs') based on visual inspection of the simulation results, where f(µa/µs') = X/X0 (Eq. (11)). Here we show an example for simulations with the MHG phase function with gHG = 0.8456 and α = 0.64, µad = 0.001-1 and two values of µs'd=0.100 and 0.144. Black line: fit of Eq. (12). For this specific set of simulations b1 = 1.25 and b2 = 0.57. For the full model, we determined b1,2 based on all MC simulations. We determined optimal values of b1,2 (Table 3) based on fitting our entire model (Eqs. (1)–(9) and (11) to the MC simulations for the set of 10 phase functions (Table 1). Figure 5 shows the simulated reflectance versus the modeled reflectance for the combination of the set of 10 phase functions and the set with 207 phase functions (using b1,2 from Table 3). For our model [Fig. 5(a)] the median error is 5.6% with a standard deviation of 8.8%. 
Figure 5(b) shows the results for the model of Kanick et al., where the median error is 19.3%, with a standard deviation of 43.2%. Fig. 5. Reflectance as predicted by the model (Rmodel) versus the reflectance obtained from the MC simulations (Rsimulations). The black line depicts a perfect prediction. Since many points overlap, colors are used to indicate the density of points (blue = low, yellow = high). (a) For our model, the median error is 5.6% with a standard deviation of 8.8%. (b) For the model of Kanick et al. the median error is 19.3% with a standard deviation of 43.2%. Table 3. Resulting parameters b1,2 based on fitting our entire model (Eqs. (1)–(9) and (11)) to the MC simulations for the set of 10 phase functions (Table 1). The 95% confidence intervals on these fit parameters are indicated. To obtain a better understanding of the relationship between the error in the reflectance and the optical properties, Fig. 6 depicts color maps of the median error in the reflectance versus µs'd, µad, and psb­ for the simulations with 10 phase functions. For a large range of optical properties, the median error is below 10%. Median errors increase up to 25% for µad values above 4, in combination with lower values of µs'd below 5. Also, median errors increase above 20% for high values of psb in combination with µs'd values above 3. We determined the median error for each type of phase function for the 36 sets of simulations with the 207 phase functions (Table 2). The median errors were 6.4% for the TTHG, 8.6% for the MHG, and 5.1% for the RMC. Fig. 6. Color maps of the relative median error in the reflectance versus µs'd, µad and psb­, for the set of simulations with 10 phase functions. To provide a clearer image, the median error values have been interpolated to a finer grid of µs'd, µad, and psb­ values. Note the vertical axis for µs'd starting at a value of 0.1, since that was the minimum value of µs'd used in the simulations. Figure 7 shows two sets of simulated tissue spectra and the fit results for µs', µa, and psb. Table 4 lists the resulting fit parameters related to blood (blood volume fraction and oxygen saturation) and scattering (scattering amplitude and slope). For both simulated tissues the fit results for µs' are very close to the simulated µs' values [Figs. 7(c), 7(h)]. The average differences in µs' over all wavelengths were 7% and 4% for tissue 1 and 2, respectively. The difference in the blood volume fraction was 20% for tissue 1 and 2% for tissue 2. The blood oxygenation was underestimated by 1% for tissue 1 and overestimated by 2% for tissue 2. The average differences in psb over all wavelengths were 9% and 4% for tissue 1 and 2, respectively [Figs. 7(e), 7(j)]. Comparing the fit result for µa [Figs. 7(d), 7(i)] and psb, it seems that these fit parameters are competing. The absorption dips of hemoglobin are visible in the fit results for psb. Fig. 7. Fit on simulated spectra for skin (a) and soft tissue (f) and their fit residuals (b) and (g), respectively. Fit results for µs' (c,h), µa (d,i) and psb (e,j) are shown here. Black lines indicate the simulated values, dashed red lines indicate the fit results. Input and fit parameters are in Table 4. Table 4. Input parameters and fit parameter results for simulated spectra for skin (Fig. 7(a-e)) and soft tissue (Fig. 7(f-j)) SFR spectroscopy is a spectroscopic technique especially suitable to detect small-scale changes in tissue. 
From the measured reflectance, tissue optical properties can be extracted and related to the disease state of tissue. However, the currently used model of Kanick et al. [14] to obtain optical properties from SFR measurements is limited to tissues with MHG phase functions. However, many tissues have different types of phase functions and often the phase function of a specific tissue under investigation is not known. Therefore, a model that is valid for a wide range of tissue phase functions is vital. Here, we developed a comprehensive model for the SFR reflectance as a function of both scattering and absorption properties of tissue that provides accurate results for a wide range of tissue phase functions. This new model predicts the measured reflectance substantially better compared to the model of Kanick et al. [14] (5.6 vs. 19.3% median error) and is valid for a wider range of phase functions. We modeled the diffuse contribution to the reflectance by solving the model for spatially resolved reflectance of Farrell et al. [28] for an overlapping source and detection fiber [26] by integrating over the probability density function of distances over the fiber face. For the diffuse reflectance in the absence of absorption, a closed form of Rdif has been derived [26], but this has not yet been done in the presence of absorption. If a closed form of Rdif is derived this could increase the speed of a fitting procedure for MDSFR spectra. Here, we showed that the diffuse reflectance depends on µs'd and µa/µs', and using MC simulations, we determined that the total reflectance can be described using µs'd, µad, and psb. We investigated the median error in the reflectance versus µs'd, µad, and psb. For a large range of optical properties, the median error was below 10%. Median errors increase above 20% from a µad value of 4, in combination with lower values of µs'd below 5. The main absorbers in tissue are blood, fat, and water. A µad value above 4 will not be reached as a result of the fat or water content of tissue. For pure fat the highest absorption coefficient from 400-1100 nm is 0.13 cm−1 [32], for a large fiber diameter of 0.1 cm, this would result in a µad value of 0.013. For pure water, assuming a large fiber diameter of 0.1 cm, µad only becomes larger than 4 for wavelengths above 1880 nm [33]. The increased errors for µad > 4 are relevant for the absorption by blood. The absorption spectrum of blood has one high peak from 400-450 nm and two lower peaks from 500-600 nm. To increase the range of blood volume fractions for which the model accurately predicts the absorption, a smaller fiber can be used. For a fiber diameter of 0.05 cm, a blood volume fraction of 28% will still result in µad < 4 for the spectrum above 450 nm [34]. For most tissue types, the blood volume fraction is in the order of 1-5% [31], therefore, the model will provide accurate results for the spectrum above 450. The blood absorption spectrum also has a high peak from 400-450 nm. Assuming a fiber diameter of 0.05 cm, µad will be below 4 from 400-450 nm for blood volume fractions up to 3%. Nevertheless, the entire absorption spectrum of blood is fitted to a measured SFR spectrum. Therefore, we expect that even for higher blood volume fractions the fit will provide accurate estimates of the blood volume fraction since the majority of the spectrum will be accurately modeled. This is demonstrated by our results for the simulated tissue with a blood volume fraction of 5%. 
Even though the residual of the fit was high for 400-450 nm (Fig. 7(g)), the fit result for the blood volume fraction was accurate (Table 4). We also found that median errors increase above 20% for high values of psb in combination with µs'd value s above 3. These higher errors can be explained by the fact that for higher values of psb there is more variation in the reflectance values obtained for different phase functions with similar psb values (Fig. 3). Higher errors for higher values of psb are thus an inevitable result when psb is used to model SFR measurements. Nevertheless, we showed previously that the variation in the reflectance was lowest for psb, compared to other parameters (σ [35], γ [36], δ [37] and RpNA [25]) that have been used to incorporate the phase function influence into models for subdiffuse reflectance [24]. Therefore, modeling the reflectance using psb will provide more accurate results than using σ, γ, δ and RpNA. To extract optical properties from SFR measurements, we used two different fiber diameters in an approach known as MDSFR. In our analysis in this paper, we assumed that both fibers sample a volume with the same optical properties. In clinical applications, when the two fibers are placed next to each other and tissue is inhomogeneous, this is not necessarily the case. Compared to DRS – where it is also assumed the tissue within the sampling volume is homogenous – MDSFR has the advantage that the sampling volume is much smaller and, therefore, the assumption of a homogenous sampling volume is more likely to hold. If two fibers are placed next to each other, the sampling volume will be shifted sideways with respect to the tissue surface by only a few hundred micrometers. Nevertheless, in MDSFR measurements the larger fiber will sample a deeper tissue volume, which is especially relevant for the absorption of light by the microvasculature. This can be accounted for by fitting separate parameters related to absorption for each fiber diameter [38]. For the MDSFR approach in this paper, we modeled the reduced scattering coefficient as µs'=a· (λ/500)−b and the absorption coefficient as a sum of absorption spectra of different absorbers present in the tissue times the concentration of these absorbers. Currently, the wavelength dependence of the phase function in general, and psb specifically, is not well-characterized. Therefore, a value of psb was fitted for each wavelength in the spectrum. It seems that µa and psb currently compete in the fit, leading to less accurate results. The robustness of the fit is expected to increase if a model for psb is used that decreases the number of fit parameters. We developed a model for SFR spectroscopy to describe the reflectance as a function of tissue scattering and absorption properties, which provides accurate results over a wide range of phase functions. The new model predicts the measured reflectance substantially better compared to the currently used model of Kanick et al. [14] which was developed for tissues with MHG phase functions only. The phase function of a specific tissue under investigation is generally not known. Therefore, a model that is valid for the wide range of phase functions that can be encountered in tissue is essential. KWF Kankerbestrijding (2014-7009); Nederlandse Organisatie voor Wetenschappelijk Onderzoek (iMIT-PROSPECT grant number 12707). The authors declare no conflicts of interest. See Supplement 1 for supporting content. 1. D. Piao, K. L. McKeirnan, N. Sultana, M. A. Breshears, A. 
Zhang, and K. E. Bartels, "Percutaneous single-fiber reflectance spectroscopy of canine intervertebral disc: Is there a potential for in situ probing of mineral degeneration?" Lasers Surg. Med. 46(6), 508–519 (2014). [CrossRef] 2. P. L. Stegehuis, L. S. F. Boogerd, A. Inderson, R. A. Veenendaal, P. van Gerven, B. A. Bonsing, J. Sven Mieog, A. Amelink, M. Veselic, H. Morreau, C. J. H. van de Velde, B. P. F. Lelieveldt, J. Dijkstra, D. J. Robinson, and A. L. Vahrmeijer, "Toward optical guidance during endoscopic ultrasound-guided fine needle aspirations of pancreatic masses using single fiber reflectance spectroscopy: a feasibility study," J. Biomed. Opt. 22(2), 024001 (2017). [CrossRef] 3. U. A. Gamm, M. Heijblom, D. Piras, F. M. Van den Engh, S. Manohar, W. Steenbergen, H. J. C. M. Sterenborg, D. J. Robinson, and A. Amelink, "In vivo determination of scattering properties of healthy and malignant breast tissue by use of multi-diameter-single fiber reflectance spectroscopy (MDSFR)," Proc. SPIE 8592, 85920T (2013). [CrossRef] 4. O. Bugter, J. A. Hardillo, R. J. Baatenburg de Jong, A. Amelink, and D. J. Robinson, "Optical pre-screening for laryngeal cancer using reflectance spectroscopy of the buccal mucosa," Biomed. Opt. Express 9(10), 4665 (2018). [CrossRef] 5. O. Bugter, M. C. W. Spaander, M. J. Bruno, R. J. Baatenburg De Jong, A. Amelink, and D. J. Robinson, "Optical detection of field cancerization in the buccal mucosa of patients with esophageal cancer," Clin. Transl. Gastroenterol. 9(4), e152 (2018). [CrossRef] 6. F. van Leeuwen-van Zaane, U. A. Gamm, P. B. A. A. van Driel, T. J. A. Snoeks, H. S. de Bruijn, A. van der Ploeg-van den Heuvel, I. M. Mol, C. W. G. M. Löwik, H. J. C. M. Sterenborg, A. Amelink, and D. J. Robinson, "In vivo quantification of the scattering properties of tissue using multi-diameter single fiber reflectance spectroscopy," Biomed. Opt. Express 4(5), 696 (2013). [CrossRef] 7. T. Sun, C. A. Davis, R. E. Hurst, J. W. Slaton, and D. Piao, "Orthotopic AY-27 rat bladder urothelial cell carcinoma model presented an elevated methemoglobin proportion in the increased total hemoglobin content when evaluated in vivo by single-fiber reflectance spectroscopy," Proc. SPIE 10038, 100380L (2017). [CrossRef] 8. S. Hariri Tabrizi, S. Mahmoud Reza Aghamiri, F. Farzaneh, A. Amelink, and H. J. C. M. Sterenborg, "Single fiber reflectance spectroscopy on cervical premalignancies: the potential for reduction of the number of unnecessary biopsies," J. Biomed. Opt. 18(1), 017002 (2013). [CrossRef] 9. L. Yu, Y. Wu, J. F. Dunn, and K. Murari, "In-vivo monitoring of tissue oxygen saturation in deep brain structures using a single fiber optical system," Biomed. Opt. Express 7(11), 4685 (2016). [CrossRef] 10. D. Piao, K. McKeirnan, Y. Jiang, M. A. Breshears, and K. E. Bartels, "A low-cost needle-based single-fiber reflectance spectroscopy method to probe scattering changes associated with mineralization in intervertebral discs in chondrodystrophoid canine species - A pilot study," Photonics Lasers Med. 1(2), 103–115 (2012). [CrossRef] 11. J. R. Mourant, J. Boyer, A. H. Hielscher, and I. J. Bigio, "Influence of the scattering phase function on light transport measurements in turbid media performed with small source-detector separations," Opt. Lett. 21(7), 546–548 (1996). [CrossRef] 12. T. Sun and D. Piao, "Simple analytical total diffuse reflectance over a reduced-scattering-pathlength scaled dimension of [10 −5, 10 −1 ] from a medium with HG scattering anisotropy," Appl. Opt. 
58(33), 9279 (2019). [CrossRef] 13. S. C. Kanick, U. A. Gamm, M. Schouten, H. J. C. M. Sterenborg, D. J. Robinson, and A. Amelink, "Measurement of the reduced scattering coefficient of turbid media using single fiber reflectance spectroscopy: fiber diameter and phase function dependence," Biomed. Opt. Express 2(6), 1687–1702 (2011). [CrossRef] 14. S. C. Kanick, U. A. Gamm, H. J. C. M. Sterenborg, D. J. Robinson, and A. Amelink, "Method to quantitatively estimate wavelength-dependent scattering properties from multidiameter single fiber reflectance spectra measured in a turbid medium," Opt. Lett. 36(15), 2997–2999 (2011). [CrossRef] 15. S. C. Kanick, D. J. Robinson, H. J. C. M. Sterenborg, and A. Amelink, "Monte Carlo analysis of single fiber reflectance spectroscopy: photon path length and sampling depth," Phys. Med. Biol. 54(22), 6991–7008 (2009). [CrossRef] 16. S. L. Jacques, C. A. Alter, and S. A. Prahl, "Angular dependence of HeNe laser light scattering by human dermis," Lasers Life Sci. 1(4), 309–333 (1987). 17. R. Marchesini, A. Bertoni, S. Andreola, E. Melloni, and A. E. Sichirollo, "Extinction and absorption coefficients and scattering phase functions of human tissues in vitro," Appl. Opt. 28(12), 2318 (1989). [CrossRef] 18. P. Saccomandi, V. Vogel, B. Bazrafshan, E. Schena, T. J. Vogl, S. Silvestri, and W. Mäntele, "Estimation of anisotropy coefficient of swine pancreas, liver and muscle at 1064 nm based on goniometric technique," J. Biophotonics 8(5), 422–428 (2015). [CrossRef] 19. P. van der Zee, M. Essenpreis, D. T. Delpy, P. Van Der Zee, and M. Essenpreis, "Optical properties of brain tissue," in Photon Migration and Imaging in Random Media and Tissues, B. Chance and R. R. Alfano, eds. (1993), Vol. 1888, pp. 454–465. 20. N. Ghosh, S. K. Mohanty, S. K. Majumder, and P. K. Gupta, "Measurement of optical transport properties of normal and malignant human breast tissue," Appl. Opt. 40(1), 176–184 (2001). [CrossRef] 21. J. Zijp and J. ten Bosch, "Optical properties of bovine muscle tissue in vitro; a comparison of methods," Phys. Med. Biol. 43(10), 3065–3081 (1998). [CrossRef] 22. L. O. Reynolds and N. J. McCormick, "Approximate two-parameter phase function for light scattering," J. Opt. Soc. Am. 70(10), 1206–1212 (1980). [CrossRef] 23. A. N. Yaroslavsky, I. V. Yaroslavsky, T. Goldbach, and H. J. Schwarzmaier, "Optical properties of blood in the near-infrared spectral range," Proc. SPIE 2678, 314–324 (1996). [CrossRef] 24. A. L. Post, H. J. C. M. Sterenborg, F. G. Woltjer, T. G. van Leeuwen, and D. J. Faber, "Subdiffuse scattering model for single fiber reflectance spectroscopy," J. Biomed. Opt. 25(01), 1 (2020). [CrossRef] 25. A. L. Post, S. L. Jacques, H. J. C. M. Sterenborg, D. J. Faber, and T. G. van Leeuwen, "Modeling subdiffusive light scattering by incorporating the tissue phase function and detector numerical aperture," J. Biomed. Opt. 22(5), 050501 (2017). [CrossRef] 26. D. J. Faber, A. L. Post, H. J. C. M. Sterenborg, and T. G. Van Leeuwen, "Analytical model for diffuse reflectance in Single Fiber Reflectance Spectroscopy," Opt. Lett. 45(7), 2078–2081 (2020). [CrossRef] 27. H. Solomon, Geometric Probability (Society for Industrial and Applied Mathematics, 1978). 28. T. J. Farrell, M. S. Patterson, and B. Wilson, "A diffusion theory model of spatially resolved, steady-state diffuse reflectance for the noninvasive determination of tissue optical properties in vivo," Med. Phys. 19(4), 879–888 (1992). [CrossRef] 29. F. Martelli, S. Del Bianco, A. Ismaelli, and G. 
Zaccanti, Light Propagation through Biological Tissue and Other Diffusive Media: Theory, Solutions, and Software (SPIE, 2009). 30. P. R. Bargo, S. A. Prahl, and S. L. Jacques, "Collection efficiency of a single optical fiber in turbid media," Appl. Opt. 42(16), 3187–3197 (2003). [CrossRef] 31. S. L. Jacques, "Optical properties of biological tissues: a review," Phys. Med. Biol. 58(14), 5007–5008 (2013). [CrossRef] 32. R. L. P. van Veen, H. J. C. M. Sterenborg, A. Pifferi, A. Torricelli, and R. Cubeddu, "Determination of VIS-NIR absorption coefficients of mammalian fat, with time- and spatially resolved diffuse reflectance and transmission spectroscopy," in Biomedical Topical Meeting (OSA, 2004), p. SF4. 33. G. M. Hale and M. R. Querry, "Optical Constants of Water in the 200-nm to 200-µm Wavelength Region," Appl. Opt. 12(3), 555 (1973). [CrossRef] 34. S. Prahl, "Optical Absorption of Hemoglobin," https://omlc.org/spectra/hemoglobin/. 35. N. Bodenschatz, P. Krauter, A. Liemert, and A. Kienle, "Quantifying phase function influence in subdiffusively backscattered light," J. Biomed. Opt. 21(3), 035002 (2016). [CrossRef] 36. F. Bevilacqua and C. Depeursinge, "Monte Carlo study of diffuse reflectance at source–detector separations close to one transport mean free path," J. Opt. Soc. Am. A 16(12), 2935 (1999). [CrossRef] 37. P. Naglič, F. Pernuš, B. Likar, and M. Bürmen, "Estimation of optical properties from subdiffusive reflectance beyond the second similarity parameter γ," in Diffuse Optical Spectroscopy and Imaging VI, H. Dehghani and H. Wabnitz, eds. (2017), p. 1041205. 38. X. U. Zhang, P. van der Zee, I. Atzeni, D. J. Faber, T. G. van Leeuwen, and H. J. C. M. Sterenborg, "Multidiameter single-fiber reflectance spectroscopy of heavily pigmented skin: modeling the inhomogeneous distribution of melanin," J. Biomed. Opt. 24(12), 127001 (2019). [CrossRef]
Supplement 1: Derivation of formula in section 2.1.1.

(1) $R_{SFR} = R_{SFR,sb} + R_{SFR,dif} = (1 + X)\cdot R_{SFR,dif}$

(2) $X = \frac{R_{SFR,sb}}{R_{SFR,dif}}$

(3) $R_{dif} = \frac{\pi d^2}{4}\int_0^d R(\rho)\, p(\rho,d)\, d\rho$

(4) $p(\rho,d) = \frac{16\rho}{\pi d^2}\cos^{-1}\!\left(\frac{\rho}{d}\right) - \frac{16}{\pi d}\left(\frac{\rho}{d}\right)^2\sqrt{1-\left(\frac{\rho}{d}\right)^2}$

(5) $R(\rho,\mu_s',\mu_a) = \frac{a'}{4\pi}\left[z_0\left(\mu_{eff}+\frac{1}{r_1}\right)\frac{e^{-\mu_{eff}\, r_1}}{r_1^2} + (z_0+2z_b)\left(\mu_{eff}+\frac{1}{r_2}\right)\frac{e^{-\mu_{eff}\, r_2}}{r_2^2}\right]$

(6) $R_{SFR,dif} = \eta_c \cdot R_{dif}$

(7) $p_{sb} = \frac{p_b(1^\circ)}{1 - p_f(23^\circ)}$

(8) $p_b(1^\circ) = 2\pi\int_{179^\circ}^{180^\circ} p(\theta)\sin\theta\, d\theta$

(9) $p_f(23^\circ) = 2\pi\int_{0^\circ}^{23^\circ} p(\theta)\sin\theta\, d\theta$

(10) $R(\mu_a) = R_0 \int_0^\infty p(l)\, e^{-\mu_a l}\, dl$

(11) $X = X_0 \cdot f\!\left(\frac{\mu_a}{\mu_s'}\right) = 3046\left(\frac{p_{sb}}{(\mu_s' d)^2}\right)^{0.748} f\!\left(\frac{\mu_a}{\mu_s'}\right)$

(12) $f\!\left(\frac{\mu_a}{\mu_s'}\right) = e^{\,b_1\,(\mu_a/\mu_s')^{b_2}}$

Table 1. 10 phase functions used to derive the model. Three types of phase functions were used: Reynolds McCormick (RMC), two-term Henyey Greenstein (TTHG), and modified Henyey Greenstein (MHG). Per phase function, psb and g1 values are given, as well as the parameters employed in the phase functions (α, gR, gf, gb, and gHG). Recovered rows:
RMC: psb = 2.04·10−6, g1 = 0.83, α = 2.2233, gR = 0.5053
TTHG: psb = 2.52·10−5, g1 = 0.52, α = 0.9500, gf = 0.55, gb = −0.15
MHG: psb = 1.19·10−4, g1 = 0.65, α = 0.7722, gHG = 0.8456

Table 2. Parameters employed in the selection of the 207 phase functions used to determine the accuracy of our model.
Modified Henyey Greenstein: 0.01 ≤ gHG ≤ 0.95 (10 linear steps); 0.01 ≤ α ≤ 0.99 (10 linear steps).
Two-term Henyey Greenstein: 0.5 ≤ α ≤ 0.9 (3 linear steps) and 0.91 ≤ α ≤ 0.99 (5 linear steps); 0.05 ≤ gf ≤ 0.95 (10 linear steps); −0.95 ≤ gb ≤ −0.05 (5 linear steps).
Reynolds McCormick: 0.01 ≤ α ≤ 2.5 (10 linear steps); 0.01 ≤ gR ≤ 0.95 − 0.2·α (10 linear steps).

Table 3. Resulting parameters b1, b2 based on fitting our entire model (Eqs. (1)–(9) and (11)) to the MC simulations for the set of 10 phase functions (Table 1). The 95% confidence intervals on these fit parameters are indicated. Recovered entry: b1 = 1.17 (±0.004).

Table 4. Input parameters and fit parameter results for simulated spectra for skin (Fig. 7(a–e)) and soft tissue (Fig. 7(f–j)). Columns: skin input, skin fit, soft-tissue input, soft-tissue fit.
Oxygen saturation: 98.0%, 96.6%, 98.0%, 99.8%
Blood volume fraction: 0.010, 0.008, 0.050, 0.051
Scattering amplitude [cm−1]: 46.0, 42.6, 18.9, 19.0
Scatter slope: 1.421, 1.424, 1.300, 1.441
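To make the reconstructed Eqs. (3)–(6) above concrete, here is a minimal numerical sketch (an editorial illustration, not code from the paper): it evaluates the diffusion-theory reflectance R(ρ) of Eq. (5) and averages it over the fiber face using the point-separation density p(ρ, d) of Eq. (4). The helper quantities are filled in from standard diffusion-theory definitions (z0 = 1/(µa + µs'), D = 1/(3(µa + µs')), zb = 2AD, µeff = √(3µa(µa + µs')), a' = µs'/(µa + µs')), which may differ in detail from the paper's choices; the boundary parameter A and the collection efficiency ηc are placeholder assumptions.

```python
import numpy as np

def farrell_R(rho, mus_r, mua, A=1.0):
    """Spatially resolved diffuse reflectance R(rho) [mm^-2], Eq. (5) (Farrell-type model)."""
    mut = mua + mus_r                 # transport attenuation coefficient
    a_p = mus_r / mut                 # transport albedo a'
    z0 = 1.0 / mut                    # depth of the equivalent isotropic source
    D = 1.0 / (3.0 * mut)             # diffusion constant
    zb = 2.0 * A * D                  # extrapolated boundary distance (A is a placeholder)
    mueff = np.sqrt(3.0 * mua * mut)  # effective attenuation coefficient
    r1 = np.sqrt(z0**2 + rho**2)
    r2 = np.sqrt((z0 + 2.0 * zb)**2 + rho**2)
    term1 = z0 * (mueff + 1.0 / r1) * np.exp(-mueff * r1) / r1**2
    term2 = (z0 + 2.0 * zb) * (mueff + 1.0 / r2) * np.exp(-mueff * r2) / r2**2
    return a_p / (4.0 * np.pi) * (term1 + term2)

def p_rho(rho, d):
    """PDF of the distance between two random points on a disc of diameter d, Eq. (4)."""
    x = rho / d
    return 16.0 * rho / (np.pi * d**2) * np.arccos(x) - 16.0 / (np.pi * d) * x**2 * np.sqrt(1.0 - x**2)

def R_dif(d, mus_r, mua, eta_c=1.0, n=2000):
    """Diffuse SFR contribution, Eqs. (3) and (6): eta_c * (pi d^2 / 4) * int R(rho) p(rho, d) drho."""
    rho = np.linspace(1e-6, d * (1.0 - 1e-9), n)
    integrand = farrell_R(rho, mus_r, mua) * p_rho(rho, d)
    drho = rho[1] - rho[0]
    return eta_c * (np.pi * d**2 / 4.0) * np.sum(integrand) * drho

# Example: 0.4 mm fiber, mus' = 1 mm^-1, mua = 0.01 mm^-1 (made-up values)
print(R_dif(d=0.4, mus_r=1.0, mua=0.01))
```

Under this reading, the semiballistic correction of Eqs. (11)–(12) would then be layered on top of the diffuse term to obtain the total reflectance of Eq. (1).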
Fourier transform of 2D Gaussian

Thus the Fourier transform of a Gaussian function is another Gaussian function. Requiring f(x) to integrate to 1 over R gives: $B_1(s) = \frac{1}{\sqrt{2\pi}} e^{s^2/4b}$, $F_1(w) = B_1(iw) = \frac{1}{\sqrt{2\pi}} e^{-w^2/4b}$.

• DCT is a Fourier-related transform similar to the DFT but using only real numbers
• DCT is equivalent to a DFT of roughly twice the length, operating on real data with even symmetry (since the Fourier transform of a real and even function is real and even), where in some variants the input and/or output data are shifted by half a sample

The Fourier transform of a Gaussian function $f(x) = e^{-ax^2}$ is given by
(1) $\mathcal{F}_x[e^{-ax^2}](k) = \int_{-\infty}^{\infty} e^{-ax^2} e^{-2\pi i k x}\,dx$
(2) $= \int_{-\infty}^{\infty} e^{-ax^2}\left[\cos(2\pi k x) - i\sin(2\pi k x)\right]dx$
(3) $= \int_{-\infty}^{\infty} e^{-ax^2}\cos(2\pi k x)\,dx - i\int_{-\infty}^{\infty} e^{-ax^2}\sin(2\pi k x)\,dx$.
The second integrand is odd, so integration over a symmetrical range gives 0. The value of the first integral is given by Abramowitz and Stegun (1972, p. 302, equation 7.4.6), so $\mathcal{F}_x[e^{-ax^2}](k) = \sqrt{\pi/a}\, e^{-\pi^2 k^2/a}$.

$\int g(x)\,dx = 1$ (i.e., normalized). The Fourier transform of the Gaussian function is given by:
(4) $G(\omega) = e^{-\omega^2\sigma^2/2}$.
Proof: We begin with differentiating the Gaussian function:
(5) $\frac{dg(x)}{dx} = -\frac{x}{\sigma^2}\, g(x)$
Next, applying the Fourier transform to both sides of (5) yields
(6) $i\omega G(\omega) = \frac{1}{i\sigma^2}\frac{dG(\omega)}{d\omega}$
(7) $\frac{1}{G(\omega)}\frac{dG(\omega)}{d\omega} = -\omega\sigma^2$.

The 2D transform is very similar to it. The integrals are over two variables this time (and they are always from −∞ to ∞, so I have left off the limits). The FT is defined as (1) and the inverse FT as (2). The Gaussian function is special in this case too: its transform is a Gaussian (3). The Fourier transform of a 2D delta function is a constant (4).

Fourier Transform--Gaussian -- from Wolfram MathWorld

Projection along vertical lines: the horizontal line through the 2D Fourier Transform equals the 1D Fourier Transform of the vertical projection. Since rotating the function rotates the Fourier Transform, the same is true for projections at all angles: $F(u, 0) = \mathcal{F}_{1D}\{\mathcal{R}\{f\}(l, 0)\}$.

The Fourier transform can be generalized to higher dimensions. Many signals are functions of 2D space defined over an x-y plane. The two-dimensional Fourier transform also has four different forms depending on whether the 2D signal is periodic and discrete: an aperiodic, continuous signal gives a continuous, aperiodic spectrum.

Fourier Transform of Gaussian. We wish to Fourier transform the Gaussian wave packet in (momentum) k-space to get the wave packet in position space. Starting from the Fourier Transform formula, we will transform the integral a few times to get to the standard definite integral of a Gaussian for which we know the answer.

I would like to work out the Fourier transform of the Gaussian function $f(x) = \exp(-n^2 (x - m)^2)$. It seems likely that I will need to use differentiation and the shift rule at some point, but I can't seem to get the calculation to work.

Two-Dimensional Fourier Transform

2D Fourier Transforms: In 2D, for signals $h(n, m)$ with N columns and M rows, the idea is exactly the same: $\hat{h}(k,l) = \sum_{n=0}^{N-1}\sum_{m=0}^{M-1} e^{-i(\omega_k n + \omega_l m)}\, h(n,m)$ and $h(n,m) = \frac{1}{NM}\sum_{k=0}^{N-1}\sum_{l=0}^{M-1} e^{i(\omega_k n + \omega_l m)}\,\hat{h}(k,l)$. Often it is convenient to express frequency in vector notation with $\vec{k} = (k, l)^t$ and $\vec{n} = (n, m)^t$. 2D Fourier basis functions: sinusoidal waveforms of different spatial frequencies and orientations. Equation [9] states that the Fourier Transform of the Gaussian is the Gaussian! The Fourier Transform operation returns exactly what it started with.
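The statement above, that the Fourier transform of a Gaussian is again a Gaussian, is easy to check numerically. The following sketch is an editorial illustration (it does not come from any of the quoted sources): it samples $f(x) = e^{-\pi x^2}$, whose continuous transform under the $e^{-2\pi i k x}$ convention is $F(k) = e^{-\pi k^2}$, and approximates the continuous transform from NumPy's DFT by multiplying by the sample spacing and re-centring the frequency axis.

```python
import numpy as np

# Sample f(x) = exp(-pi x^2) on a symmetric grid
N = 1024
dx = 0.02
x = (np.arange(N) - N // 2) * dx
f = np.exp(-np.pi * x**2)

# Approximate the continuous FT: F(k) ~ dx * sum_n f(x_n) exp(-2*pi*i*k*x_n).
# ifftshift aligns x = 0 with array index 0; fftshift re-centres the frequency axis.
F_num = np.fft.fftshift(np.fft.fft(np.fft.ifftshift(f))) * dx
k = np.fft.fftshift(np.fft.fftfreq(N, d=dx))

F_exact = np.exp(-np.pi * k**2)
print(np.max(np.abs(F_num.real - F_exact)))  # close to machine precision
print(np.max(np.abs(F_num.imag)))            # close to zero
```

The re-centring step matters: without ifftshift before the FFT, the DFT of a Gaussian centred in the middle of the array picks up a rapidly alternating phase factor, which is one common source of the oscillation and phase artefacts complained about in some of the excerpts below.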
This is a very special result in Fourier Transform theory. The Fourier Transform of a scaled and shifted Gaussian can be found here Fourier Transform of Gaussian - University of California Have a look at the Fourier Transfrom of a Gaussian Signal. F x { e − a x 2 } (ω) = π a e − π 2 ω 2 a First, Gaussian Signal stays Gaussian under Fourier Transform. As you can see, the parameter which multiplies the variable is inverted = e−2π 2σ2f. Under the Fourier transform, the Gaussian function is mapped to another Gaussian function with a different width. If σ2 is large/small then h(t) is narrow/broad in the time domain. Notice how the width is inverted in the frequency domain. Thi Derivation of fourier transform of a 2D gaussian function. A 2D gaussian function is given by \eqref{eqaa} Note that \eqref{eqaa} can be written as, Given any 2D function , its fourier transform is given by. A 2D function is separable, if it can be written as . If and are the fourier transforms of and respectively, then, From \eqref{eqab}, \eqref{eqad}, and \eqref{eqf}, we derive the fourier. We implement an efficient method of computation of two dimensional Fourier-type integrals based on approximation of the integrand by Gaussian radial basis functions, which constitute a standard tool in approximation theory. As a result, we obtain a rapidly converging series expansion for the integrals, allowing for their accurate calculation. We apply this idea to the evaluation of diffraction integrals, used for the computation of the through-focus characteristics of an optical. Fourier transforms in 2D x, k - a new set of conjugate variables image processing with Fourier transforms. Fourier Transform Magnitude and Phase For any complex quantity, we can decompose f(t) and F ) into their magnitude and phase. f(t) can be written: f(t) = Mag{f(t)} exp[ j Phase{f(t)}] where Mag{f(t)}2 is called the intensity, I(t),* and Phase{f(t)} is called the temporal phase, (t. n-dimensional Fourier Transform 8.1 Space, the Final Frontier To quote Ron Bracewell from p. 119 of his book Two-Dimensional Imaging, In two dimensions phenomena are richer than in one dimension. True enough, working in two dimensions offers many new and rich possibilities. Contemporary applications of the Fourier transform are just as likely to come from problemsin two, three, and even. The Fourier transform of a function of time is a complex-valued function of frequency, whose magnitude (absolute value) represents the amount of that frequency present in the original function, and whose argument is the phase offset of the basic sinusoid in that frequency Engineering Tables/Fourier Transform Table 2 From Wikibooks, the open-content textbooks collection < Engineering Tables Jump to: navigation, search Signal Fourier transform unitary, angular frequency Fourier transform unitary, ordinary frequency Remarks 10 The rectangular pulse and the normalized sinc function 11 Dual of rule 10. The rectangular function is an idealized low-pass filter, and. A fourier transform implicitly repeats indefinitely, as it is a transform of a signal that implicitly repeats indefinitely. Note that when you pass y to be transformed, the x values are not supplied, so in fact the gaussian that is transformed is one centred on the median value between 0 and 256, so 128 How to calculate the Fourier transform of a Gaussian function So, the plots for gaussian, fourier(gaussian), inverse_fourier(fourier(gaussian)) are the following:Initial, Fourier, Inverse Fourier. 
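As an editorial aside (this is not the original poster's code), the sequence just described, a Gaussian, its Fourier transform, and the inverse transform of that, can be reproduced in NumPy for the 2D case; with a real, centred Gaussian input the 2D spectrum is again Gaussian and essentially real, and the inverse FFT recovers the input:

```python
import numpy as np

N = 256
x = np.arange(N) - N // 2
X, Y = np.meshgrid(x, x)
g = np.exp(-(X**2 + Y**2) / (2.0 * 20.0**2))   # 2D Gaussian, sigma = 20 pixels

# Centred 2D spectrum, then the inverse transform
G = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(g)))
g_back = np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(G)))

print(np.max(np.abs(G.imag)) / np.max(np.abs(G.real)))  # tiny: nearly real spectrum for a real, even input
print(np.max(np.abs(g_back.real - g)))                  # ~1e-16: inverse FFT recovers the input
```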
Using plt.imshow(), I additionally plot fourier of gaussian: plt.imshow(F) plt.colorbar() plt.show() The result is as follows: imshow. That doesn't make sense The phase will become totally null adding a threshold separately on the real and imaginary parts of the Fourier transform. re(abs(re) < 1e-10) = 0; imm(abs(imm) < 1e-10) = 0 Figure10-2. The Fourier transform of a single square pulse. This function is sometimes called the sync function. Vector Spaces in Physics 8/6/2015 10 - 5 0 1/ x ( ) lim 2 0x 2 a a x a (10-14) This function, shown in figure 10-3, is a rectangular pulse of width a and height h = 1/a. Its area is equal to A f x dx h a( ) 1 , so it satisfies the integral requirement for the delta function. And in. TheFourierTransform Gaussian e t2 Gaussian e u 2 Differentiation d dt Ramp 2 iu The Fourier Transform: Examples, Properties, Common Pairs Properties: Notation Let F denote the Fourier Transform: F = F (f) Let F 1 denote the Inverse Fourier Transform: f = F 1 (F ) The Fourier Transform: Examples, Properties, Common Pairs Properties: Linearity Adding two functions together adds their Fourier Transforms together: F. 2. Obtain the Fourier transform of the image with padding: F=fft2(f, PQ(1), PQ(2)); 3. Generate a filter function, H, the same size as the image 4. Multiply the transformed image by the filter: G=H.*F; 5. Obtain the real part of the inverse FFT of G: g=real(ifft2(G)); 6. Crop the top, left rectangle to the original size: g=g(1:size(f, 1), 1:size(f, 2)); 2.2 Example: Applying the Sobel Filter. The Fourier transform 11-2. Fo urier transform and Laplace transform Laplace transform of f F (s)= ∞ 0 f (t) e − st dt Fourier tra nsform of f G (ω)= ∞ −∞ f (t) e − jωt dt very similar definition s, with two differences: • Laplace transform integral is over 0 ≤ t< ∞;Fouriertransf orm integral is over −∞ <t< ∞ • Laplace transform: s can be any complex number in. link of phase space in statistical physics video*****https://youtu.be/nasckugngvclink of size of a phase sp.. This corrects the sinusoidal behaviour, and removes the maginary part, but does not improve the amplitude result (same as absolute value of unshifted case). I will however be taking the absolute value in any case. Here is the code: def test_gauss_1D (self,a,f_c,delta_f): delta_t = 1.0/ (2.0*f_c) N = int (np.ceil (1/ (delta_f*delta_t)))+1 if (N %. Find the Fourier transform of the Gaussian function f(x) = e−x2. Start by noticing that y = f(x) solves y′ +2xy = 0. Taking Fourier transforms of both sides gives (iω)ˆy +2iyˆ′ = 0 ⇒ ˆy′ + ω 2 ˆy = 0. The solutions of this (separable) differential equation are yˆ = Ce−ω2/4. We find that C = ˆy(0) = 1 √ 2π Z∞ −∞ e. If the convolving optical point-spread function causing defocus is an isotropic Gaussian whose width represents the degree of defocus, it is clear that defocus is equivalent to multiplying the 2D Fourier transform of a perfectly focused image with the 2D Fourier transform of the defocusing (convolving) Gaussian. This latter quantity is itself just another 2D Gaussian within the Fourier. I do know that the Fourier transform of a 1D Gaussian function f(x)=e-ax 2 is measured using the following functional:$$ \mathcal{F_x(e^{-ax^2})(k)}=\sqrt{\frac{\pi}{a}}e^{\frac{-\pi^2k^2}{a}}$$ My questions are 1) how can I calculate the Fourier transform for the 2D anisotropic Gaussian function g(x,y)? 2) why are there two spatial standard deviations (σ x, and σ x') defined in the Gaussian. image processing - Why Does 2D FFT of Gaussian Looks More The transform looks like. 
exp (-kr^2* (-a-i*b)/ (4* (a^2+b^2)) [1] where kr is the radial spacial frequency coordinate. Consequently you have the radial oscillations of plot 6 and 7. A more traditional, real gaussian has b=0 and the fft is shown in plots 8 and 9 This next activity is all about the properties and applications of the 2D Fourier Transform. Anamorphic Property of FT of Different 2D Patterns. In the FT process, a signal of X dimension transforms to a 1/X dimension. This means that if a signal appears wide on an axis, it will appear narrow in the spatial frequency axis Consider what happens to the previously mentioned real-space Gaussian, and its Fourier transform, in the limit , or, equivalently, . There is no difficulty in seeing, from Equation , that (718) In other words, the real space Gaussian morphs into a function that takes the constant value unity everywhere. The Fourier transform is more problematic. In the limit , Equation yields a -space function. The Fourier transform of the derivative of a function is H-iwL times the Fourier transform of the function. For each differentiation, a new factor H-iwL is added. So the Fourier transforms of the Gaussian function and its first and second order derivatives are: s=.;Simplify@FourierTransform@ 8gauss@x,sD,∑xgauss@x,sD,∑8x,2<gauss@x,sD<,x,wD,s>0D 9 ‰-1ÅÅÅ 2s 2w2 ÅÅÅÅÅÅÅÅè. First, Gaussian Signal stays Gaussian under Fourier Transform. As you can see, the parameter which multiplies the variable is inverted. Let's say $ a = 5 $, then it means that in time we will have very sharp and thin Gaussian while in frequency we will have very smooth and wide Gaussian. This is related to other property of Forier Transform. In simple words, what's thin on Time / Spatial. In this activity, we will further explore the properties of the 2D Fourier transform such as the anamorphic property and rotation property of the 2D Fourier transform of different patterns. I. Anamorphic Property of the FT of 2D patterns . Anamorphism is the inverse relation between the space dimension of a function or image and its spatial frequency dimension upon performing the Fourier. Fourier Transform of Gaussian Cuthbert Nyack. The gaussian is an example of a self reciprocal function, ie both function and its transform has the same form. The time and frequency functions are shown below. In the applet below, f(t) is in red, F(w) is in green. The product is shown in yellow. both time and frequency ranges are ±2.5. The product has maximum width when a = 0.5. Return to main. Shows that the Gaussian function exp( - a. t. 2) is its own Fourier transform. For this to be integrable we must have Re(a) > 0. common in optics . a>0. the transform is the function itself 0 the rectangular function. J (t) is the Bessel function of first kind of order 0, rect. is n Chebyshev polynomial of the first kind. it's the generalization of the previous transform; T (t) is the . U. n. Computation of 2D Fourier transforms and diffraction integrals using Gaussian radial basis functions A. Mart´ınez-Finkelshtein a,b, ´, D. Ramos-Lopeza, D. R. Iskanderc aDepartment of. Lecture 2: Fourier Transforms, Delta Functions and Gaussian Integrals In the rst lecture, we reviewed the Taylor and Fourier series. These where both essentially ways of decomposing a given function into a di er-ent, more convenient, or more meaningful form. In this lecture, we review the generalization of the Fourier series to the Fourier transformation. In the context, it is also natural to. 2.3.2 Why Gaussian Filter is efficient to remove noise? 
Fourier Transform Before getting into the answer for this question, we need to know the Fourier transform first The Schwartz Class 2 3. The Fourier Transform and Basic Properties 4 4. Fourier Inversion 8 5. The Uncertainty Principle 13 6. The Amrein-Berthier Theorem 15 Acknowledgments 17 References 17 1. Introduction For certain well-behaved functions from the real line to the complex plane, one can de ne a related function which is known as the Fourier transform. The Fourier transform of a function f. It always takes me a while to remember the best way to do a numerical Fourier transform in Mathematica (and I can't begin to figure out how to do that one analytically). So I like to first do a simple pulse so I can figure it out. I know the Fourier transform of a Gaussian pulse is a Gaussian, so . pulse[t_] := Exp[-t^2] Cos[50 t Phase of 2D Gaussian Fourier Transform. Learn more about gaussian 3d, gaussian 2d, fft, 2d-fft, phase fourier transform 2d D.2). The Fourier transform of a Gaussian is thus F(q) = A √ π a e−q2/(4a2) (E.2) which is itself a Gaussian. It is instructive to consider the width ∆x (full width at half maximum) of the Gaussian function and the width ∆q of its Fourier transform. From Eq. (E.1), ∆x=2 p loge(2)/a, and from Eq. (E.2), ∆q=4a p loge(2). The productof the widths is a constant equal to ∆x∆q. 2.1 Properties of the Fourier Transform The Fourier transform has a range of useful properties, some of which are listed below. In most cases the proof of these properties is simple and can be formulated by use of equation 3 and equation 4.. The proofs of many of these properties are given in the questions and solutions at the back of this booklet g(ω) = 1 2 [δ(ω + Ω) + δ(ω − Ω)]. The Fourier transform of a pure cosine function is therefore the sum of two delta functions peaked at ω = ± Ω. This result can be thought of as the limit of Eq. (9.16) when κ → 0. In this case we are dealing with a function f(t) with Δt = ∞ and a Fourier transform g(ω) with Δω = 0 Computation of Fourier Transform of an Input Image followed by application of Gaussian and Butterworth Low Pass filters. - bneogy92/2D-Fast-Fourier-Transform The Fourier transform analyzes a signal in terms of its frequencies, transforms convolutions into products, and transforms Gaussians into Gaussians. The Weierstrass transform is convolution with a Gaussian and is therefore multiplication of the Fourier transformed signal with a Gaussian, followed by application of the inverse Fourier transform. This multiplication with a Gaussian in frequency. Understanding Gabor Filter - GitHub Page e k2t+ikx dk = p 1 4ˇ t e 1 4 t x2: (For the last step, we can compute the integral by completing the square in the exponent. Al-ternatively, we could have just noticed that we've already computed that the Fourier transform of the Gaussian function p 1 4ˇ t e 21 4 t x2 gives us e k t.) Finally, we need to know the fact that Fourier. The Fourier transform of g (t) has a simple analytical expression , such that the 0th frequency is simply root pi. If I try to do the same thing in Python: N = 1000 t = np.linspace (-1,1,N) g = np.exp (-t**2) h = np.fft.fft (g) #This is the Fourier transform of expression g. Simple enough. Now as per the docs h [0] should contain the zero. Reconstruction of 2D Gaussian phase from an incomplete fringe pattern using Fourier transform profilometry Authors. Jayson Puti Cabanilla National Institute of Physics, University of the Philippines Diliman Nathaniel Hermosa National Institute of Physics, University of the. 
Computation of 2D Fourier transforms and diffraction Consider a white Gaussian noise signal $ x \left( t \right) $. If we sample this signal and compute the discrete Fourier transform, what are the statistics of the resulting Fourier amplitudes ation of molecular structures for both theoretical and practical reasons. On the theory side, it describes diffraction patterns and images that are obtained in the electron microscope. It is also the basis of 3D reconstruction algorithms. In the practical processing of EM images the FT is also useful because many operations, such as image. Fourier Transform is used to analyze the frequency characteristics of various filters. For images, 2D Discrete Fourier Transform (DFT) is used to find the frequency domain. A fast algorithm called Fast Fourier Transform (FFT) is used for calculation of DFT. Details about these can be found in any image processing or signal processing textbooks In x 2, our notations of Fourier transforms are defined, and the distribution function of Fourier modes in Gaussian random fields are reviewed. In x 3, methods to derive distribution function in general non-Gaussian fields are de- tailed, and an explicit expression up to second order is obtained. In x 4, N-point distribution functions of Fourier modes are introduced. Amplitude of discrete Fourier transform of Gaussian is incorrect. Ask Question Asked 7 years, 2 months ago. Active 7 years, 2 months ago. Viewed 2k times 0 $\begingroup$ I am trying to understand why the amplitude of the FFT (computed with numpy) of a Gaussian differs from its analytic solution. The $\mathcal{F}\{e^{-\pi t^2}\} = e^{-\pi f^2}$. However if I calculate it with the FFT function. Fourier transform - Wikipedi My discrete Fourier transform actually gives the result that I expected (The continuous Fourier transform of a real valued Gaussian function is a real valued Gaussian function too). In short: Why is the real part of fftgauss oscillating? Best Answer. If T = N*dt and Fs = 1/T. t = linspace(- (T - dt)/2 , (T - dt)/2 , N ) % N odd . or. t = linspace( - T/2 , T/2 - dt , N ) % N even. and. f. Gaussian - Gaussian (inverse variance) Common Transform Pairs Summary. Quiz What is the FT of a triangle function? Hint: how do you get triangle function from the functions shown so far? Triangle Function FT. Triangle = box convolved with box So its FT is sinc * sinc. Fourier Transform of Images • Forward transform: • Backward transform: • Forward transform to freq. yields complex. Computational Efficiency. Using the Fourier transform formula directly to compute each of the n elements of y requires on the order of n 2 floating-point operations. The fast Fourier transform algorithm requires only on the order of n log n operations to compute. This computational efficiency is a big advantage when processing data that has millions of data points The goals for the course are to gain a facility with using the Fourier transform, both specific techniques and general principles, and learning to recognize when, why, and how it is used. Together with a great variety, the subject also has a great coherence, and the hope is students come to appreciate both. Topics include: The Fourier transform as a tool for solving physical problems Zheng, C.: Fractional Fourier transform for a hollow Gaussian beam. Phys. Lett. A 355, 156-161 (2006) Article ADS Google Scholar Zhou, G.: Fractional Fourier transform of Lorentz-Gauss beams. J. Mod. Opt. 
56, 886-892 (2009) Article ADS Google Schola The Fourier Transform can be used for this purpose, which it decompose any signal into a sum of simple sine and cosine waves that we can easily measure the frequency, amplitude and phase. The Fourier transform can be applied to continuous or discrete waves, in this chapter, we will only talk about the Discrete Fourier Transform (DFT). Using the DFT, we can compose the above signal to a series. After that, Fourier transform it was evidence that Fourier transform can be applied everywhere and in such case, you can implement, for example, the convolution really fast if the size of the input signal and the size of the input kernel are rather high. Let me show you how to use Fourier transformation for image processing. There are several filters. We will consider only the most simple ones. Figure 2: Spectral and temporal profile of a Gaussian pulse with the spectrum clipped below 794nm. The DnFWHMDtFWHM = 0.55 even though the pulse is transform limited. Note the broadened pulsewidth (125 fs) as compared to the pulsewidth in Figure 1. 1.0 0.8 0.6 0.4 0.2 0.0-300 -200 -100 0 100 200 300 Time (fs) 785 790 795 800 805 810 815. 2 $F(u)=e^{-\pi u^2}$. Plugging $a=1$, $b=\pi$, and $c=0$ into Eq. \eqref{eq:fourier_3} gives us \begin{equation} \begin{split} f(x) &=\sqrt{\pi/\pi}\,e^{-\pi 0^2+\pi. On the other hand, the FRFT is an extension of the conventional Fourier transform, was first introduced by Ozaktas and Mendlovic into Wang, X., Liu, Z., Zhao, D.: Fractional Fourier transform of hollow sinh-Gaussian beams. Opt. Engineer. 53, 086112-086117 (2014) Google Scholar Wang, X., Zhao, D.: Simultaneous nonlinear encryption of grayscale and color images based on phase-truncated. numpy - Fourier transform of a Gaussian is not a Gaussian Fourier Transform of Array Inputs. Find the Fourier transform of the matrix M. Specify the independent and transformation variables for each matrix entry by using matrices of the same size. When the arguments are nonscalars, fourier acts on them element-wise Step 1: Compute the 2-dimensional Fast Fourier Transform. The result from FFT process is a complex number array which is very difficult to visualize directly. Therefore, we have to transform it into 2-dimension space. Here are two ways that we can visualize this FFT result: 1. Spectrum 2. Phase angle. Figure (d): (from left to right) (1) Spectrum (2) Phase Angle. From Figure (d)(1), there are. Millones de Productos que Comprar! Envío Gratis en Pedidos desde $59 ed by the contribution from a certain frequency component while the phase carries. Computation of 2D Fourier transforms and diffraction integrals using Gaussian radial basis functions A. Mart´ınez-Finkelshtein a,b, ´, D. Ramos-Lopeza, D. R. Iskanderc aDepartment of Mathematics, University of Almer´ıa, Spain bInstituto Carlos I de F´ısica Te ´orica y Computacional, Granada University, Spain cDepartment of Biomedical Engineering, Wroclaw University of Technology. The Fourier transform of the Gaussian is Fg: R ! R; Fg(˘) = Z R g(x) ˘ (x)dx: Note that Fgis real-valued because gis even. We have the derivatives @ @˘ ˘ (x) = 2ˇix ˘ (x); d dx g(x) = 2ˇxg(x); @ @x ˘ (x) = 2ˇi˘ ˘ (x): To study the Fourier transform of the Gaussian, di erentiate under the integral sign, then use the rst two equalities in the previous display, then integrate by parts. 
2D Discrete Fourier Transform • Fourier transform of a 2D signal defined over a discrete finite 2D grid of size MxN or equivalently • Fourier transform of a 2D set of samples forming a bidimensional sequence • As in the 1D case, 2D-DFT, though a self-consistent transform, can be considered as a mean of calculating the transform of a 2D sampled signal defined over a discrete grid. • The. A. Gaussian Fourier Transform. The command linspace gives a vector of N linearly spaced numbers between an upper and lower bound. We can combine this with meshgrid to generate a domain for creating and plotting functions. Create a 256×256 domain over -20 to 20 as follows. [xx,yy] = meshgrid( linspace(-20,20,256), linspace(-20,20,256) ); Using its functional form, g(x,y)=exp(−(x 2 +y 2)/2. • Thus the 2D Fourier transform maps the original function to a complex-valued function of two frequencies 35 f(x,y)=sin(2π⋅0.02x+2π⋅0.01y) Three-dimensional Fourier transform • The 3D Fourier transform maps functions of three variables (i.e., a function defined on a volume) to a complex-valued function of three frequencies • 2D and 3D Fourier transforms can also be computed. The Fourier transform of the multidimentional generalized Gaussian distribution January 2011 International Journal of Pure and Applied Mathematics 67(4):443-45 The Fourier transform of a Gaussian - YouTub Expression (1.2.2) is called the Fourier integral or Fourier transform of f. Expression (1.2.1) is called the inverse Fourier integral for f. The Plancherel identity suggests that the Fourier transform is a one-to-one norm preserving map of the Hilbert space L2[1 ;1] onto itself (or to another copy of it-self). We shall show that this is the case. Furthermore we shall show that the pointwise. Gaussian derivative kernels act like bandpass filters. Task 1: Show with partial integration and the definitions from section 3.10 that the Fourier transform of the derivative of a function is (-iω) times the Fourier transform of the function. Task 2: Note that there are several definitions of the signs occurring in the Fourier transform is the Gaussian function. Its Fourier transform also is a Gaussian function, but in the frequency domain. The Fourier transform relation between widths of Gaussians in the time domain and frequency domain is also very simple: σ tσ ω=1. This equation clearly shows the inverse relation between time domain and frequency domain functions. While this simple relation (σ tσ ω=1) only. To find the Fourier Transform of the Complex Gaussian, we will make use of the Fourier Transform of the Gaussian Function, along with the scaling property of the Fourier Transform. To start, let's rewrite the complex Gaussian h(t) in terms of the ordinary Gaussian function g(t): [Equation 2] Now, we'd like to use the scaling property of the Fourier Transform directly, but note that the. There are actually many different Fourier transforms, as you can learn about in this post: https://www.quora.com/q/vxyolmbprxfkpixg/Integral-Transforms-Part-I-Weak. 2 Gaussian filters Remove high the product of their Fourier transforms F[g * h] = F[g]F[h] The inverse Fourier transform of the product of two Fourier transforms is the convolution of the two inverse Fourier transforms F-1[g * h] = F-1[g]F-1[h] Convolution in spatial domain is equivalent to multiplication in frequency domain. Derivative theorem of convolution This saves us one operation. the Gaussian (54) with standard deviation σ > 0. 
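Several of the excerpts in this collection state the convolution theorem, $F[g*h] = F[g]\,F[h]$. As an editorial illustration (not taken from any of the quoted sources), the identity can be checked numerically with the FFT, zero-padding both signals so that the FFT's circular convolution coincides with the ordinary linear convolution:

```python
import numpy as np

rng = np.random.default_rng(0)
g = rng.normal(size=100)
h = np.exp(-0.5 * (np.arange(-30, 31) / 5.0) ** 2)  # a small Gaussian kernel

# Direct (linear) convolution
direct = np.convolve(g, h)

# FFT-based convolution: pad both signals to the full output length first,
# otherwise the FFT computes a circular convolution.
n = len(g) + len(h) - 1
fft_conv = np.fft.ifft(np.fft.fft(g, n) * np.fft.fft(h, n)).real

print(np.max(np.abs(direct - fft_conv)))  # close to machine precision
```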
Since Gσ and f belong to L1, so does Gσ ∗f, and since Gcσ decays expo-nentially, G\ σ ∗f = Gcσfbbelongs to L1. Hence by Plancherel's formula for L1 functions with L1 Fourier transform (Theorem 2.1 2)) and the explicit formula for the Fourier transform of a Gaussian (Example 2, Section. 2 Fourier transform of a power Theorem 2 Let 1 < a < n. The Fourier transform of 1/|x|a is Ca/|k|n−a, where Ca = (2π) n 2 2n−a 2 Γ(n−a 2) 2a 2 Γ(a 2). (10) This is not too difficult. It is clear from scaling that the Fourier transform of 1/|x|a is C/|k|n−a. It remains to evaluate the constant C. Take the inner product with the Gaussian. This gives Z Rn (2π)−n2 e− x2 2 1 |x|a. However, you could expand the imaginary exponential in a power series and perform the integral term-by-term to get a power series representation of the fourier transform. In this case, the following integral (3.326-2) is useful: ∫ 0 ∞ d x x m exp. ⁡. ( − β x n) = Γ ( γ) n β γ, where γ = ( m + 1) / n The two fourier transforms (image and filter) are multiplied, and the inverse fourier transform is obtained. The result is a filtered version of the original image shifted by (kernel diameter - 1)/2 toward the end of each dimension. The data is shifted back by (kernel diameter - 1)/2 to the start of each dimension before the image is stripped to the original dimensions. In every dimension the. Example: the Fourier Transform of a Gaussian, exp(-at2), is itself! 22 2 {exp( )} exp( )exp( ) exp( /4 ) at at i t dt a ω ω ∞ −∞ −=− − ∝− F ∫ 0! t! exp( )− at 2 0! w! exp( /4 )−ω2 a The details are a HW problem! ∩. The Dirac delta function Unlike the Kronecker delta-function, which is a function of two integers, the Dirac delta function is a function of a real. The Fourier transform of a Gaussian is well known to be another Gaussian function, as the plot confirms. I adjusted the width of each Gaussian so that the widths would be about equal in both domains. The Gaussians were sampled at various values of n, increasing in steps by a factor of 4. You can measure the width dropping by a factor of 2 at each step. For those of you who have already learned. C : jcj= 1g. So, the fourier transform is also a function fb:Rn!C from the euclidean space Rn to the complex numbers. The gaussian function ˆ(x) = e ˇ kx 2 naturally arises in harmonic analysis as an eigenfunction of the fourier transform operator. Lemma 2 The gaussian function ˆ(x) = e ˇkxk2 equals its fourier transform ˆb(x) = ˆ(x. Let's compute, G(s), the Fourier transform of: g(t) =e−t2/9. We know that the Fourier transform of a Gaus-sian: f(t) =e−πt2 is a Gaussian: F(s)=e−πs2. We also know that : F {f(at)}(s) = 1 |a| F s a . We need to write g(t) in the form f(at): g(t) = f(at) =e−π(at)2. Let a = 1 3 √ π: g(t) =e−t2/9 =e−π 1 3 √ π t 2 = f 1 3 √ π t . It follows that: G(s) =3 √ πe−π(3 � The Fourier Transform Overview . The Fourier Transform is important for two key reasons: Sine waves are easy to work with mathematically, and Sine waves form a basis over the space of functions. That is, just like you can express any point in a 2D plane as a sum of an component and a component, with an appropriate coefficient multiplying each unit vector, you can express any function as a sum. $\begingroup$ Also, if you write code for Fresnel, it will work in the far-field (Fraunhoffer) zone. I'll edit the above for the scales which are valid for each approximation. 
I believe that the Fresnel approximation is more stable numerically because some of the high frequency components of the actual free space transfer function are not well approximated when they are discretized Fourier Transform of the Gaussian Beam. Loading... Optical Efficiency and Resolution. University of Colorado Boulder 4.1 (40 ratings) We will discuss a few Fourier Transforms that show up in standard optical systems in the first subsection and use these to determine the system resolution, and then discuss the differences between coherent and incoherent systems and impulse responses and. Discrete Fourier Transform . See section 14.1 in your textbook. This is a brief review of the Fourier transform. An in-depth discussion of the Fourier transform is best left to your class instructor. The general idea is that the image (f(x,y) of size M x N) will be represented in the frequency domain (F(u,v)). The equation for the two-dimensional discrete Fourier transform (DFT) is: The. The Fourier transform of a complex Gaussian can also be derived using the differentiation theorem and its dual (§ B.2 ). D.1. as expected. The Fourier transform of complex Gaussians (`` chirplets '') is used in § 10.6 to analyze Gaussian-windowed ``chirps'' in the frequency domain . Why Gaussian Die Fourier-Transformation (genauer die kontinuierliche Fourier-Transformation; Aussprache: [fuʁie]) ist eine mathematische Methode aus dem Bereich der Fourier-Analyse, mit der aperiodische Signale in ein kontinuierliches Spektrum zerlegt werden. Die Funktion, die dieses Spektrum beschreibt, nennt man auch Fourier-Transformierte oder Spektralfunktion Its Fourier transform is also a Gaussian function, F(v) = (1/2&a,) exp( - u2/4cV2), with power-rms width 1 *=-zzq. l.J (A.2-4) Since ataV = 1/47r, the Gaussian function has the minimum permissible value of the duration-bandwidth product. In terms of the angular frequency w = 27rv, uto- 2 ;. (A.2-5) If the variables t and w, which usually describe time and angular frequency (rad/s), are. Note that the Fourier transform of a Gaussian is another Gaussian (although lacking the normalisation constant). There is a phase term, corresponding to the position of the center of the Gaussian, and then the negative squared term in an exponential. Also notice that the standard deviation has moved from the denominator to the numerator. This means that, as a Gaussian in real space gets. Remark 4.2: Extensive numerical experiments show that n = 16 gives, for all smooth functions, results attaining the machine precision. For double precision, we choose a = 44/M 2 and M 2 = 8M.For n = 16, we need 8 Laplace transform values for the quadrature rule and we use an oversampling factor of M 2 /M = 8; thus, on average, we need 64 Laplace transform values for the computation of 1. numpy - Fourier Transform in Python 2D - Stack Overflo 2- and N-D discrete Fourier transforms ¶ The functions fft2 and ifft2 provide 2-D FFT and IFFT, respectively. Similarly, fftn and ifftn provide N-D FFT, and IFFT, respectively. For real-input signals, similarly to rfft, we have the functions rfft2 and irfft2 for 2-D real transforms; rfftn and irfftn for N-D real transforms Fourier Transform Definition of Fourier Transform. The Fourier transform is a representation of an image as a sum of complex exponentials of varying magnitudes, frequencies, and phases. 
The Fourier transform plays a critical role in a broad range of image processing applications, including enhancement, analysis, restoration, and compression The Fourier transform of the Gaussian function is proportional to the Gaussian function. This fact is often underlined but it is not uniqe. There are many functions which have the same form as their Fourier transform (e.g. |x|−1/2 (cf. section 1.3.7), P ∞ n=−∞ δ(x−n) (cf. section 4.3) and others). Note: There are other ways how to calculate the Fourier transform of the Gaussian. If X is a vector, then fft(X) returns the Fourier transform of the vector.. If X is a matrix, then fft(X) treats the columns of X as vectors and returns the Fourier transform of each column.. If X is a multidimensional array, then fft(X) treats the values along the first array dimension whose size does not equal 1 as vectors and returns the Fourier transform of each vector Figures 2(c) and 2(e) are the results with Δ t = 1 × 10 − 3 fs and Figs. 2(d) and 2(f) are those with Δ t = 2 × 10 − 4 fs. The shapes of the core excitation spectra from the C 1s orbital in Figs. 2(c) and 2(d) are slightly different from each other and their peak positions have deviations of more than 0.1 eV, as shown in Table II 7. The Dilated Gaussian and its Fourier Transform The just-mentioned problems are circumvented by the Gaussian trick. It requires the Fourier transform of the n-dimensional dilated Gaussian function. To begin, recall that the one-dimensional Gaussian function,: R ! R; (x) = e x2=2; is its own Fourier transform under our rescaled measure. (Here. The array is multiplied with the fourier transform of a Gaussian kernel. Parameters input array_like. The input array. sigma float or sequence. The sigma of the Gaussian kernel. If a float, sigma is the same for all axes. If a sequence, sigma has to contain one value for each axis. n int, optional. If n is negative (default), then the input is assumed to be the result of a complex fft. If n is. Fourier Transform Ahmed Elgammal Dept. of Computer Science Rutgers University Outlines Fourier Series and Fourier integral Fourier Transform (FT) Discrete Fourier Transform (DFT) Aliasing and Nyquest Theorem 2D FT and 2D DFT Application of 2D-DFT in imaging Inverse Convolution Discrete Cosine Transform (DCT) Sources: Forsyth and Ponce, Chapter 7 Burger and Burge Digital Image Processing. The Fourier transform of a Gaussian pulse is also a Gaussian pulse. A. True B. False Answer: A Clarification: Gaussian pulse, x(t) = e-πt 2 Its Fourier transform is X(f) = e-πf 2 Hence, the Fourier transform of a Gaussian pulse is also a Gaussian pulse. 4. Find the Fourier transform of f(t)=te-at u(t). A. (frac{1}{(a-jω)^2} ) B. (frac{1}{(a+jω)^2} ) C. (frac{a}{(a-jω)^2} ) D. (frac{ω}{(a. The continuous Fourier transform of a real valued Gaussian function is a real valued Gaussian function too... In order to answer this question, I have written a simple discrete Fourier transform, see below. dftgauss = zeros(128); for n = 1:128 . for m = 1:128. dftgauss(n) = dftgauss(n) + gauss(m)*exp(2.0*pi*i*fn(n)*tn(m)); end. end. and dftgauss is shown below. Clearly, fftgauss and dftgauss. 
Confirmatory prediction-driven RCTs in comparative effectiveness settings for cancer treatment Adam Brand ORCID: orcid.org/0000-0003-1300-52921, Michael C. Sachs ORCID: orcid.org/0000-0002-1279-86761,2, Arvid Sjölander1 & Erin E. Gabriel1,2 British Journal of Cancer (2023) Cite this article Predictive markers Medical advances in the treatment of cancer have allowed the development of multiple approved treatments and prognostic and predictive biomarkers for many types of cancer. Identifying improved treatment strategies among approved treatment options, the study of which is termed comparative effectiveness, using predictive biomarkers is becoming more common. RCTs that incorporate predictive biomarkers into the study design, called prediction-driven RCTs, are needed to rigorously evaluate these treatment strategies. Although researched extensively in the experimental treatment setting, the literature provides little guidance on prediction-driven RCTs in the comparative effectiveness setting. Realistic simulations with time-to-event endpoints are used to compare contrasts of clinical utility and provide examples of simulated prediction-driven RCTs in the comparative effectiveness setting. Our proposed contrast for clinical utility accurately estimates the true clinical utility in the comparative effectiveness setting, whereas in some scenarios the contrast used in the current literature does not. It is important to properly define contrasts of interest according to the treatment setting. Realistic simulations should be used to choose and evaluate the RCT design(s) able to directly estimate that contrast. In the comparative effectiveness setting, our proposed contrast for clinical utility should be used. Advances in cancer treatment have allowed the development of multiple approved treatments for many types of cancer. Some of these treatments target specific sub-types defined by a biological mechanism. Identifying the optimal treatment for patients from multiple approved options is complex; the study of this question is termed comparative effectiveness. Biomarker signatures, comprising one or more biomarker measurements, are used to determine the biological targets for specific treatments and/or to identify those patients expected to benefit from them. 
The use of these predictive biomarkers in cancer treatment is commonplace [1,2,3,4,5,6,7,8,9,10]. All treatments, targeted or otherwise, should provide compelling evidence of benefit compared to other approved treatment options through confirmatory randomised controlled trials (RCTs). RCTs designed specifically to evaluate treatments incorporating predictive biomarkers, or prediction-driven RCTs, are essential to future drug development and prescription [11]. Hu and Dignam provide an overview of the key concepts of prediction-driven RCTs [9]. Prediction-driven RCTs can be used to refine patient populations and identify superior treatment strategies when new biomarker signatures and treatments become available. NCI-MATCH is a high-profile platform trial that evaluates biomarker-directed treatment strategies, also referred to as prediction-driven decision rules, for underexplored cancer types [12]. ProBio is a platform RCT to identify new biomarker-directed treatment strategies that improve patient outcomes in metastatic castrate-resistant prostate cancer, currently comparing only among approved treatments [13]. The SHIVA trial evaluates molecular profiling to direct treatment of metastatic solid tumours [14]. Renfro et al. provide a review of prediction-driven RCTs along with additional examples of such trials [8]. The focus of this paper is the confirmatory comparative effectiveness setting, so we limit our focus to prediction-driven RCT designs amenable to frequentist analyses that can reliably control type 1 error. The prediction-driven RCT designs relevant to this setting fall into three categories: enrichment, biomarker-stratified and biomarker-strategy [8]. Although compared and evaluated extensively in the experimental treatment setting, these designs have not, to our knowledge, been evaluated and compared in a comparative effectiveness setting, which requires special consideration of clinical utility. Clinical utility of a biomarker signature, defined as the improvement in patient outcome from having knowledge of the biomarker, is as important as, or more important than, clinical validity, defined as the ability of a biomarker to accurately predict the effect of treatment; however, clinical utility is often overlooked [15]. A biomarker signature can have high clinical validity while also having little to no clinical utility. For example, consider a patient population in which a biomarker signature accurately predicts which subgroup benefits most from which treatment, say, high-risk patients benefit from aggressive treatment while low-risk patients benefit from milder treatment. If a physician can also classify patients accurately based on information routinely collected outside of the biomarker signature, then the signature has no clinical utility, because knowledge of the biomarker does not improve patient outcomes over the standard of care. Note that a biomarker cannot have clinical utility without clinical validity, because knowledge of a biomarker cannot improve outcomes if it cannot predict the outcome of treatment. Biomarker signatures can be costly and invasive for patients and should be avoided if they do not lead to improved outcomes, so the clinical utility of a biomarker signature should be rigorously evaluated before being adopted into standard practice [15]. Evaluating clinical utility in RCTs depends heavily on the treatment setting. Consider for example the experimental treatment setting. 
What does it mean to be treated without knowledge of the biomarker in an experimental treatment setting? Even without knowledge of the biomarker the new treatment may improve outcomes. Authors who have evaluated the performance of RCT designs to estimate clinical utility in the experimental setting, such as Shih and Lin [16, 17] and Sargent and Allegra [18], have defined standard of care as a randomised mix of experimental treatment and existing treatment. Comparing this standard of care to the biomarker-directed arm, one can test the global null hypothesis that biomarker-directed treatment assignment is the same as undirected treatment assignment. This definition of standard of care may be useful during the developmental stage in the experimental setting. In the comparative effectiveness setting, which is the focus of this paper, the treatment the patient would normally receive without knowledge of the biomarker status, as directed by a physician, is a more relevant definition of standard of care. We will refer to this definition of standard of care as physician's choice throughout, and use it in our definition of clinical utility. We formally define clinical utility in section "Prediction-driven RCT designs and contrasts of interest". Evaluating clinical validity may be less valuable than evaluating clinical utility in the comparative effectiveness setting, but it can still be useful. As stated above, when a biomarker signature is either costly or invasive, it should be evaluated for clinical utility, and clinical utility implies clinical validity. Therefore, evaluating both may not be efficient. However, when there are multiple biomarker-directed treatment strategies, as in ProBio [13], evaluating clinical validity, which typically requires smaller sample sizes [16,17,18], among the treatment strategy options before evaluating clinical utility in the best-performing treatment strategies may be a more efficient use of resources. Thus, RCTs specifically designed to detect clinical validity, or differential treatment effect between biomarker-defined subgroups, may also be useful in the comparative effectiveness setting. In prediction-driven RCTs, it is common to evaluate the treatment effect for a particular subgroup. This is typically done in the experimental setting when it is thought that only one subgroup will benefit from a, frequently targeted, treatment. This may also be of value in the comparative effectiveness setting, for example, if a new biomarker is developed that is thought to identify a subgroup from a population that may benefit from a treatment previously shown to be inferior in the overall population. In this case, it may be unethical to randomise any patient not in the identified subgroup. To our knowledge, there has not been a systematic comparison of confirmatory prediction-driven RCT designs in the comparative effectiveness setting. The prediction-driven RCT designs able to estimate clinical utility in the comparative effectiveness setting differ from those in the experimental setting, because the definition of standard of care is different. While the appropriate definition of standard of care in the experimental setting remains an open question to us, we propose a definition in the comparative effectiveness setting that provides easily interpretable results. The primary aims of this paper are to define and motivate the use of our proposed contrast for clinical utility in the comparative effectiveness setting for cancer treatment research. 
We distinguish it from the other contrasts for measuring clinical utility that have been proposed in the experimental setting both theoretically and in a simulation study. We define other statistical contrasts of interest in the comparative effectiveness setting and demonstrate the prediction-driven RCT designs that can identify estimands for each contrast under minimal assumptions. For each estimand, we describe common/useful estimation options for practitioners considering such contrasts and trials in comparative effectiveness cancer treatment research. Finally, we provide step-by-step examples for simulating realistic RCTs and guidance on designing such trials using simulation. Prediction-driven RCT designs and contrasts of interest Assume there are two approved treatment options, \(X \in \{A, B\}\), and that there is a biomarker signature that classifies the patient population for a specific cancer into two groups, positives (M = 1) and negatives (M = 0). Also assume that the proposed biomarker-directed treatment strategy is to treat all positive patients with treatment B and all negative patients with treatment A. Let T be the event time, or time to death (progression or failure), of a patient, possibly right-censored. Using counterfactual notation common in causal inference [19], let TB be the counterfactual outcome of a patient had they been assigned to treatment B and likewise for TA. For a patient who is factually assigned to B, the factual outcome T equals the counterfactual outcome TB, whereas for a patient who is factually assigned to A, the factual outcome T equals the counterfactual outcome TA. Also, let \(T_{\text{biomarker-directed}}\) be the counterfactual outcome of a patient had they been assigned treatment according to the biomarker-directed strategy, that is, treating biomarker-positive patients with B and negative patients with A. Finally, let \(T_{\text{physician-directed}}\) be the counterfactual outcome of a patient had they been assigned treatment according to a physician's prescribed treatment, choosing from either A or B without knowledge of biomarker status. The outcome T often represents a potentially right-censored time-to-event variable, but the contrasts defined below extend to any outcome. Let g(·) denote a summary statistic available for the outcome, T, such as the hazard or restricted mean survival time. Also assume that, as expected in an RCT for cancer treatment, there is no interference between patients. Then, the three contrasts of interest that we consider can be represented as below, with an absolute and relative difference presented for each. 
Treatment effect for a subgroup $$g(T_B \mid M = m) - g(T_A \mid M = m)$$ $$\frac{g(T_B \mid M = m)}{g(T_A \mid M = m)}$$ Clinical validity (differential treatment effect between subgroups) $$\left[ g(T_B \mid M = 1) - g(T_A \mid M = 1) \right] - \left[ g(T_B \mid M = 0) - g(T_A \mid M = 0) \right]$$ $$\frac{g(T_B \mid M = 1)}{g(T_A \mid M = 1)} \Big/ \frac{g(T_B \mid M = 0)}{g(T_A \mid M = 0)} = \frac{g(T_B \mid M = 1)\, g(T_A \mid M = 0)}{g(T_A \mid M = 1)\, g(T_B \mid M = 0)}$$ Clinical utility (proposed for comparative effectiveness) $$g(T_{\text{biomarker-directed}}) - g(T_{\text{physician-directed}})$$ $$\frac{g(T_{\text{biomarker-directed}})}{g(T_{\text{physician-directed}})}$$ Figure 1 presents four prediction-driven RCT designs common to the literature on prediction-driven RCTs [8, 18]: the enrichment design (a), the biomarker-stratified design (b), the biomarker-strategy design (c), and the modified biomarker-strategy design (d). Table 1 summarises which prediction-driven RCT designs can directly estimate which of the three contrasts of interest in this comparative effectiveness setting. The following subsections detail the ability of each design to estimate these contrasts. Fig. 1: Prediction-driven RCT designs. The enrichment design, biomarker-stratified design, biomarker-strategy design and modified biomarker-strategy design are depicted in panels (a), (b), (c) and (d), respectively. M denotes biomarker status and X denotes treatment assignment. Table 1 Identifiable contrasts when comparing approved treatments (A versus B) among biomarker-defined subgroups. Note that in the comparative effectiveness setting, we define clinical utility as a biomarker-directed treatment strategy versus what a physician would prescribe without knowledge of the biomarker (physician's choice). As discussed in the introduction, this differs from the definition used in the current literature on prediction-driven trials and alters the types of RCT designs that are able to estimate it. The contrast used in the experimental setting by Shih and Lin [17], for example, is as below, where \(T_{\text{randomized}}\) is the counterfactual outcome for a patient whose treatment assignment was randomised. Clinical utility (experimental) $$g(T_{\text{biomarker-directed}}) - g(T_{\text{randomized}})$$ $$\frac{g(T_{\text{biomarker-directed}})}{g(T_{\text{randomized}})}$$ Specifically, while the biomarker-stratified and modified biomarker-strategy designs are able to estimate the experimental contrast for clinical utility, they are not able to estimate the proposed comparative effectiveness contrast above, as we discuss in the following subsections. Enrichment design In an enrichment design, randomisation makes ignorable treatment assignment a plausible assumption. 
Thus, the following equalities hold: $$g(T_B \mid M = 1) = g(T \mid X = B, M = 1) \quad (1)$$ $$g(T_A \mid M = 1) = g(T \mid X = A, M = 1) \quad (2)$$ Therefore, the contrast for treatment effect in the positive subgroup can be estimated by: $$g(T \mid X = B, M = 1) - g(T \mid X = A, M = 1)$$ $$\frac{g(T \mid X = B, M = 1)}{g(T \mid X = A, M = 1)}$$ As biomarker-negative patients are not enrolled in this design, the enrichment design cannot estimate the treatment effect in the negative subgroup without assuming a treatment effect distribution in the unobserved subgroup. Therefore, the enrichment design cannot estimate differential treatment effect nor clinical utility without that strong assumption. Biomarker-stratified design The biomarker-stratified design can be viewed as two enrichment designs running in parallel, as randomisation is conducted within each of the subgroups. Therefore, (1) and (2) continue to hold while (3) and (4), their analogues for the negative subgroup, are similarly true. These equalities allow for estimating the treatment effect for each subgroup as in section "Enrichment design" and the differential treatment effect as below. $$\left[ g(T \mid X = B, M = 1) - g(T \mid X = A, M = 1) \right] - \left[ g(T \mid X = B, M = 0) - g(T \mid X = A, M = 0) \right]$$ $$\frac{g(T \mid X = B, M = 1)}{g(T \mid X = A, M = 1)} \Big/ \frac{g(T \mid X = B, M = 0)}{g(T \mid X = A, M = 0)} = \frac{g(T \mid X = B, M = 1)\, g(T \mid X = A, M = 0)}{g(T \mid X = A, M = 1)\, g(T \mid X = B, M = 0)}$$ Unlike in the experimental setting referred to in Shih and Lin [16, 17] and Sargent and Allegra [18], the biomarker-stratified design cannot directly estimate clinical utility in the comparative effectiveness setting. Although randomisation allows estimating the summary statistic for the biomarker-directed arm, it cannot directly estimate the physician-directed arm, because a physician may assign treatment differently from the randomised assignment performed in the biomarker-stratified design. Biomarker-strategy design The biomarker-strategy design, by randomising to a biomarker-directed arm versus a physician's choice arm, directly evaluates the clinical utility of a biomarker signature. From randomisation it follows that, $$g(T_{\text{biomarker-directed}}) = g(T \mid \text{arm} = \text{biomarker-directed}),$$ $$g(T_{\text{physician-directed}}) = g(T \mid \text{arm} = \text{physician-directed})$$ And so the contrast for clinical utility in the comparative effectiveness setting is $$g(T \mid \text{arm} = \text{biomarker-directed}) - g(T \mid \text{arm} = \text{physician-directed})$$ $$\frac{g(T \mid \text{arm} = \text{biomarker-directed})}{g(T \mid \text{arm} = \text{physician-directed})}$$ Note that because there is no randomisation to treatment A versus B within either subgroup, treatment effect within subgroups cannot be directly estimated with the biomarker-strategy design. 
Therefore, this design is not able to estimate treatment effect for either subgroup nor differential treatment effect without strong additional assumptions. Modified biomarker-strategy design The modified biomarker-strategy design, proposed by Sargent and Allegra [18] and discussed in Shih and Lin [17], is a hybrid of the biomarker-stratified and biomarker-strategy designs. It compares a biomarker-directed treatment arm to a fully randomised arm, with or without stratification by marker status. If biomarker status is not obtained in the fully randomised arm, only (1) and (4) hold true, so neither treatment effect in a subgroup nor differential treatment effect can be estimated without assuming treatment effect distributions in (2) and (3). Also, there is no direct estimate available for the outcome in the physician's choice arm, so clinical utility is also unestimable without strong additional assumptions. If biomarker status is obtained in the randomised arm, then any modified biomarker-strategy design can be replicated with a biomarker-stratified design by altering the randomisation probabilities for each subgroup to match those in the modified biomarker-strategy design. Let rpos1 be the probability of a positive patient being randomised to treatment B in a biomarker-stratified design, rstrat be the probability of being randomised to the biomarker-directed treatment arm in the modified biomarker-strategy design, and rpos2 be the probability of a positive patient who was randomised to the randomised arm in the modified biomarker-strategy design being further randomised to treatment B. Then the probability of a positive patient being assigned to treatment B in the modified biomarker-strategy design is $$P(X = B \mid M = 1, \text{modified biomarker-strategy design}) = r_{\text{strat}} + (1 - r_{\text{strat}}) \cdot r_{\text{pos2}}$$ Setting this probability equal to rpos1 ensures equal randomisation probabilities for both designs. Owing to randomisation, (1)–(4) all hold in both designs, so their ability to estimate the contrasts of interest is the same. Estimands and estimators There are multiple ways to quantify differences in the distributions of censored time-to-event outcomes, and no option is superior for every setting. In this section, we discuss four common and/or useful estimands for differences in survival: the logrank statistic (LR), the hazard ratio (HR), the difference in survival probability at a pre-specified time (SD), and the absolute difference in restricted mean survival time (RMST) at a pre-specified time. Other estimands can certainly be appropriate. LR is not appropriate for testing for a differential treatment effect nor can it directly estimate any of the contrasts in section "Prediction-driven RCT designs and contrasts of interest". However, it is the most commonly used statistic for testing for differences in survival between two groups in confirmatory RCTs for cancer treatment. The HR is an estimand of the relative difference representations for each of the contrasts in section "Prediction-driven RCT designs and contrasts of interest", while the SD and RMST are estimands of the absolute difference representations. For estimation of SD and RMST, a pseudo-observation technique developed by Andersen et al. [20] is used. 
We chose this method of estimation due to its non-parametric modelling of the outcome and ease and flexibility of incorporating covariates, which is useful when estimating differential treatment effect and/or including baseline covariates predictive of outcome; other estimators are of course possible. Logrank statistic The well-known logrank statistic is often used to test whether censored time-to-event distributions are different between two groups [21, 22]. Let S1 and S2 be survival distributions for groups 1 and 2, respectively. The null hypotheses corresponding to the contrasts in section "Prediction-driven RCT designs and contrasts of interest" are \(H_0: S_{X = B, M = m} = S_{X = A, M = m}\) for the treatment effect in a single subgroup and \(H_0: S_{\text{biomarker-directed}} = S_{\text{physician-directed}}\) for clinical utility of a biomarker signature. LR does not provide a measure of the magnitude of survival differences. It can only test that survival distributions are equal. The logrank statistic is also not appropriate for testing for a differential treatment effect, because that test is of the null hypothesis that all four survival distributions are equal. If there is an equal, non-zero treatment effect in both subgroups, then the null hypothesis that the LR tests is false despite the absence of a differential effect. Hazard ratio The hazard ratio (HR) is a common estimand of a magnitude of differences in survival using censored data that compares hazards, or the instantaneous risks of death (progression or failure), between two groups. Cox proportional hazards regression, proposed by Cox [23], is used to estimate HR in our simulations below, which is valid for estimating a treatment effect under three conditions: (1) the hazard of death (progression or failure) is the same for censored and uncensored subjects at all times t; (2) the hazards in the two groups are proportional to each other at all times t; and (3) both TA and TB are independent of X, as is the case in a confirmatory RCT for cancer treatment. One of the benefits of using the Cox model to estimate HR is that it allows for the adjustment of covariates by including them in the model. In this way, we can estimate the differential treatment effect as in (6). When estimating the contrasts in section "Prediction-driven RCT designs and contrasts of interest", g(·) is the hazard, and the following Cox models will be fit, $$\lambda(t; X) = \lambda_0(t)\, e^{\beta_0 \cdot I(X = B)} \quad (5)$$ $$\lambda(t; X, M) = \lambda_0(t)\, e^{\beta_0 \cdot I(X = B) + \beta_1 \cdot M + \beta_2 \cdot I(X = B) \cdot M} \quad (6)$$ $$\lambda(t; \mathrm{ARM}) = \lambda_0(t)\, e^{\beta_0 \cdot I(\mathrm{ARM} = \text{biomarker-directed})} \quad (7)$$ where λ0(t) represents the baseline hazard. Equation (5) estimates the treatment effect within a single subgroup as \(e^{\beta_0}\), (6) estimates the differential treatment effect as \(e^{\beta_2}\), and (7) estimates the clinical utility of the biomarker signature as \(e^{\beta_0}\). The corresponding tests for statistical significance are based on \(H_0: \beta_0 = 0\), \(H_0: \beta_2 = 0\) and \(H_0: \beta_0 = 0\), respectively. The disadvantages of estimation via the hazard ratio are the reliance on the proportional hazards assumption and the complex interpretation [24, 25]. 
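As a concrete illustration of fitting an interaction model of the form (6), the following is a minimal sketch only; the published simulation code is written in R (see the availability statement below), so this Python example with the lifelines package, simulated data and illustrative column names (time, event, X_B, M) is our own assumption, not the authors' implementation.

```python
# Minimal sketch: fit a Cox model with a treatment-by-biomarker interaction,
# as in model (6), on simulated data. All variable names are illustrative.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(1)
n = 2000
M = rng.binomial(1, 0.4, n)                      # biomarker status (1 = positive)
X_B = rng.binomial(1, 0.5, n)                    # randomised to B within each stratum
log_hr = -0.4 * X_B * M                          # B helps only the positive subgroup
T = rng.exponential(scale=12.0 * np.exp(-log_hr))  # event times (months)
C = rng.exponential(scale=30.0, size=n)          # independent censoring times

df = pd.DataFrame({
    "time": np.minimum(T, C),
    "event": (T <= C).astype(int),
    "X_B": X_B,
    "M": M,
})
df["X_B_M"] = df["X_B"] * df["M"]                # interaction term I(X=B)*M

cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="event")
# exp(coef) for the interaction row estimates the differential treatment effect
print(cph.summary.loc["X_B_M", ["exp(coef)", "p"]])
```

With this setup the interaction hazard ratio should be close to exp(-0.4) ≈ 0.67, illustrating how the test of \(H_0: \beta_2 = 0\) targets clinical validity rather than clinical utility.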
There are formal tests to indicate evidence of a departure from proportional hazards, and any procedure to test the assumption of proportional hazards and modify the analysis if needed should be carefully pre-specified in the trial study analysis plan prior to unblinding of treatment assignment. Hernán details the implications of deviations from proportional hazards on estimation and interpretation [26]. Difference in survival probability at a pre-specified time The difference in survival probabilities (SD) estimand is the difference of two groups' probability of surviving past a specified time point, that is, $$SD(t) = P(T_1 > t) - P(T_2 > t)$$ for groups 1 and 2. SD can be estimated using a pseudo-observation technique developed by Andersen et al. [20], which uses non-parametric, Kaplan–Meier-based modelling of right-censored survival data while allowing for the adjustment of covariates [27]. Klein et al. apply this technique to the comparison of survival probabilities at fixed time points, and show that it works well when incorporating covariates and when the proportional hazards assumption is violated [28]. Overgaard et al. present the asymptotic theory of the pseudo-observation technique and prove that the estimating procedure used by Klein et al. is consistent under a condition of completely independent censoring, meaning that censoring is independent of event time, event type and covariates [29]. The pseudo-observation technique for censored time-to-event variables is as follows. Let Ti, i from \(1, \ldots, n\), be independent and identically distributed time-to-event variables, and let θ be the expected value of some function of Ti, that is \(\theta = E[f(T_i)]\), where θ may be multivariate. Also assume that there is a consistent estimator of θ, \(\hat\theta\), and suppose there are measured covariates Zi. The ith pseudo-observation is defined by, $$\hat\theta_i = n \cdot \hat\theta - (n - 1)\, \hat\theta^{-i}$$ where \(\hat\theta^{-i}\) is the "leave-one-out" estimator for θ. Regressing on Z now corresponds to specifying the relationship between θi and Zi using a generalised linear model with link function \(\phi(\cdot)\), $$\phi(\theta_i) = \beta^T \mathbf{Z}_i$$ To estimate SD, one sets \(f(T_i) = I(T_i > t)\), lets \(\phi(\cdot)\) be the identity link function, and computes the pseudo-observations at a single time point, t. The pseudo-observations are then regressed according to the following models, $$\theta_i(t) = \beta_0 + \beta_1 \cdot I(X = B) \quad (8)$$ $$\theta_i(t) = \beta_0 + \beta_1 \cdot I(X = B) + \beta_2 \cdot M + \beta_3 \cdot I(X = B) \cdot M \quad (9)$$ $$\theta_i(t) = \beta_0 + \beta_1 \cdot I(\text{arm} = \text{biomarker-directed}) \quad (10)$$ where (8) estimates treatment effect for a subgroup as β1, (9) estimates differential treatment effect as β3, and (10) estimates clinical utility of the biomarker signature as β1. The corresponding tests for statistical significance are based on \(H_0: \beta_1 = 0\), \(H_0: \beta_3 = 0\) and \(H_0: \beta_1 = 0\), respectively. Although not dependent on the proportional hazards assumption, SD may be heavily dependent on the time point chosen to compare survival probabilities. 
When survival distributions cross, estimating SD at different times, even without bias, can provide qualitatively different results. Restricted mean survival time (RMST) at a pre-specified time Restricted mean survival time (RMST), proposed by Irwin [30], is also estimated using the pseudo-observation technique, which was extended to estimate RMST by Andersen et al. [31]. RMST is defined as the average survival time up to time t, that is, \(E[\min(T, t)]\). It can also be expressed as the area under a survival curve up to time t, that is, $$\mathrm{RMST} = \int_0^t S(u)\, du$$ and the estimate of RMST is, $$\hat\theta = \int_0^t \hat S(u)\, du$$ To estimate RMST, one sets \(f(T_i) = \min(T_i, t)\), lets \(\phi(\cdot)\) be the identity link, and computes the pseudo-observations at a single time point, t. The regression equations used to estimate the contrasts of interest in section "Prediction-driven RCT designs and contrasts of interest" are similar to (8)–(10). So then (8) estimates treatment effect for a single subgroup as β1, (9) estimates differential treatment effect as β3, and (10) estimates clinical utility of the biomarker signature as β1. The null hypotheses for testing for statistical significance are as in section "Difference in survival probability at a pre-specified time". This method of estimating RMST compares entire survival curves up to a time point while relying neither on the proportional hazards assumption nor on the assumption that survival curves do not cross. RMST may still be sensitive to the choice of t, because it ignores all information after t. However, it is not as sensitive as SD, because unlike SD, RMST incorporates all information in the survival function up until time t. Simulations similar to those proposed by Rubinstein et al. [32] are used to closely simulate the conduct of actual prediction-driven, comparative effectiveness RCTs using time-to-event outcomes, providing estimates of relevant operating characteristics under assumed/estimated parameters. The code for the simulations is written in R and is publicly available at github.com/Adam-Brand/Prediction_Driven_Trials. The Supplement details the steps taken for the simulations in this paper as well as example simulations for each of the estimands of interest. These simulations can be used to design specific RCTs based on estimated/assumed inputs such as minimum clinically beneficial treatment effect and expected event rate. To illustrate the potential for error in assessing clinical utility using the experimental estimand in the comparative effectiveness setting, we conducted a simulation with ideal physician's choice of treatment. This means that the physician always prescribes treatment according to the marker-directed treatment strategy; this can occur if other readily available patient data can accurately predict the biomarker status of the patient without obtaining the biomarker. As discussed in the introduction, this represents a scenario where the true clinical utility is zero, because knowledge of the biomarker cannot improve treatment outcomes. Survival times are generated as independent draws from an exponential distribution, where median survival time is set separately for each of the four subgroups defined by biomarker status and treatment assignment, and survival times are independent of any other factors. 
Median survival for positive patients on treatment A, negative patients on treatment B and negative patients on treatment A is set to 9, 9, and 12 months, respectively. Median survival for positive patients on treatment B is set to 9, 12 and 21 months in different scenarios. Estimation is based on 1000 trials for each scenario, with varying effect sizes and proportion of biomarker positives. Table 2 presents results comparing the experimental contrast for clinical utility to the proposed contrast for clinical utility in the comparative effectiveness setting. As discussed above, true clinical utility in this scenario is zero, so the true HR is 1 and the true RMST and SD differences are zero. As shown, using the experimental contrast for clinical utility inflates the type 1 error well above the nominal 0.05 level and distorts the true measures under every estimation method. Using the proposed contrast for clinical utility generally maintains the desired type 1 error under the null in this comparative effectiveness setting and estimates the true clinical utility accurately using any estimation method. Table 2 Comparing contrasts of clinical utility over different estimands in the comparative effectiveness setting with ideal physician choice (no clinical utility). Guidance for designing prediction-driven comparative effectiveness RCTs It is important to define the target contrast and simulate using the design(s) able to directly identify that contrast when designing an RCT. Realistic simulations should be used based on estimated parameters, when possible, to achieve the desired operating characteristics, and the final design choice should be conservative with respect to those simulation results, that is, favour a larger sample size. Details of how to conduct a realistic simulation in this setting and examples of such simulations for each of the contrasts of interest are provided in the supplement. These simulations can be used to design specific RCTs based on estimated/assumed inputs such as minimum clinically beneficial treatment effect and expected event rate. Previous literature has argued against the use of the biomarker-strategy design due to inferior efficiency, defined as larger sample size to achieve equal power [17, 18, 33]. However, the definition of clinical utility in these comparisons is questionable for the comparative effectiveness setting, comparing a biomarker-directed treatment arm either to a single treatment option, that is, ignoring the treatment effect in one of the subgroups, or to randomised treatment, which is not the standard of care in the comparative effectiveness setting. Defining clinical utility this way allows for estimation of clinical utility using the biomarker-stratified design, but as shown in Table 2, can produce results far from the truth. In the comparative effectiveness setting, the definition proposed in section "Prediction-driven RCT designs and contrasts of interest" should be used to provide a direct, interpretable estimate of clinical utility. The biomarker-strategy design is the only design in the literature able to directly estimate this definition of clinical utility. We provide four useful options for quantifying the contrasts of interest in the comparative effectiveness setting for cancer research. Other options can be appropriate or even superior depending on the specific setting. Reasonable estimating options should be vetted through accurate simulations similar to those described in the supplement; a minimal illustration of such a simulation is sketched below. 
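The following is a toy re-creation of the ideal-physician-choice scenario, not the published simulation code (which is in R at the repository cited above). It assumes, for brevity, no censoring, RMST computed directly on uncensored draws, an arbitrary 24-month horizon and a 50% biomarker prevalence; it is meant only to show why the proposed contrast is null here while the experimental contrast is not.

```python
# Sketch: exponential survival with subgroup-specific medians under an ideal
# physician (who always matches the biomarker-directed rule), comparing the
# proposed and experimental clinical-utility contrasts on the RMST scale.
import numpy as np

rng = np.random.default_rng(7)
n, t_star, p_pos = 50_000, 24.0, 0.5          # sample size, RMST horizon (months), P(M=1)
median = {("pos", "A"): 9, ("pos", "B"): 21,  # months; positives benefit from B
          ("neg", "A"): 12, ("neg", "B"): 9}
scale = {k: m / np.log(2) for k, m in median.items()}   # exponential scale from median

def rmst(times, t=t_star):
    # mean of min(T, t) = area under the survival curve up to t
    return np.minimum(times, t).mean()

M = np.where(rng.random(n) < p_pos, "pos", "neg")

def draw(strategy):
    # draw survival times under a treatment-assignment rule M -> {A, B}
    trt = [strategy(m) for m in M]
    return np.array([rng.exponential(scale[(m, x)]) for m, x in zip(M, trt)])

T_directed  = draw(lambda m: "B" if m == "pos" else "A")   # biomarker-directed arm
T_physician = draw(lambda m: "B" if m == "pos" else "A")   # ideal physician: same rule
T_random    = draw(lambda m: rng.choice(["A", "B"]))       # 50/50 randomised arm

print("proposed contrast (RMST diff):    ", rmst(T_directed) - rmst(T_physician))  # ~0
print("experimental contrast (RMST diff):", rmst(T_directed) - rmst(T_random))     # > 0
```

Because the physician and biomarker-directed rules coincide, the proposed contrast fluctuates around zero, while the experimental contrast is positive whenever B helps positives; this is the mechanism behind the inflated type 1 error in Table 2.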
Practitioners are encouraged to use simulations that match their setting as closely as possible to determine the best estimator for their contrast of interest. We propose a definition and a set of estimands for quantification of clinical utility that are appropriate for the comparative effectiveness setting. We motivate the use of our proposed definition and estimands for clinical utility in this setting in comparison to an estimand and definition previously proposed and used in the experimental literature. We explain and demonstrate in simulations why these two concepts of clinical utility differ. We highlight that the RCT designs able to directly estimate estimands under this definition of clinical utility are not as previously reported in the experimental literature. We define some possible estimands for this contrast of interest in this setting, describe the RCT designs able to estimate them, and evaluate viable options for estimation. We additionally consider other contrasts that may be of interest in the comparative effectiveness setting, suggesting estimands, estimators and RCT designs that are useful for them. Using these illustrations and demonstrations of the contrasts, we provide guidance for the use of prediction-driven RCTs in the comparative effectiveness setting for cancer research. We also provide a guide to realistic trial simulation and guidance for designing such RCTs using simulations. Although our simulations and discussion involve only two treatment options and two biomarker-defined subgroups, the designs and analyses can easily be extended to multiple treatment options and subgroups. Following the steps in the simulation guide in the supplement can provide estimated operating characteristics for trials with several treatment options and/or subgroups. We call attention to the fact that previous definitions of clinical utility in the experimental RCT setting may lack interpretability/usefulness. Future research should explore the definition of clinical utility further in the experimental setting. Another potential area of future research is the estimation of the above contrasts in observational data; this would extend the work of Sachs et al. [15]. All code for the simulations, including the code to simulate the datasets, is publicly available at github.com/Adam-Brand/Prediction_Driven_Trials. Slamon D. Herceptin: increasing survival in metastatic breast cancer. Eur J Oncol Nurs. 2000;4:24–9. Paik S. Clinical trial methods to discover and validate predictive markers for treatment response in cancer. Biotechnol Annu Rev. 2003;9:259–67. Conley BA, Taube SE. Prognostic and predictive markers in cancer. Dis Markers. 2004;20:35–43. Taube SE, Jacobson JW, Lively TG. Cancer diagnostics. Am J Pharmacogenomics. 2005;5:357–64. Sequist LV, Bell DW, Lynch TJ, Haber DA. Molecular predictors of response to epidermal growth factor receptor antagonists in non–small-cell lung cancer. J Clin Oncol. 2007;25:587–95. Bonomi PD, Buckingham L, Coon J. Selecting patients for treatment with epidermal growth factor tyrosine kinase inhibitors. Clin Cancer Res. 2007;13:4606s–12s. Mandrekar SJ, Sargent DJ. Predictive biomarker validation in practice: lessons from real trials. Clin Trials. 2010;7:567–73. Renfro LA, Mallick H, An M-W, Sargent DJ, Mandrekar SJ. Clinical trial designs incorporating predictive biomarkers. Cancer Treat Rev. 2016;43:74–82. Hu C, Dignam JJ. Biomarker-driven oncology clinical trials: Key design elements, types, features, and practical considerations. 
JCO Precis Oncol. 2019;1:1–12. Mandrekar SJ, Sargent DJ. Molecular diagnostics. In: Jorgensen JT, Winther H, editors. New York: Jenny Stanford Publishing; 2019. pp. 227–50. Woosley R, Cossman J. Drug development and the FDA's Critical Path Initiative. Clin Pharm Ther. 2007;81:129–33. Flaherty KT, Gray RJ, Chen AP, Li S, McShane LM, Patton D, et al. Molecular landscape and actionable alterations in a genomically guided cancer clinical trial: National Cancer Institute Molecular Analysis for Therapy Choice (NCI-MATCH). J Clin Oncol. 2020;38:3883–94. Crippa A, De Laere B, Discacciati A, Larrson B, Connor JT, Gabriel EE, et al. The ProBio trial: molecular biomarkers for advancing personalized treatment decision in patients with metastatic castration-resistant prostate cancer. Trials. 2020;21:1–10. Le Tourneau C, Delord JP, Goncales A, Gavoille C, Dubot C, Isambert N, et al. Molecularly targeted therapy based on tumour molecular profiling versus conventional therapy for advanced cancer (SHIVA): a multicenter, open-label, proof-of-concept, randomized, controlled phase 2 trial. Lancet Oncol. 2015;16:1324–34. Sachs MC, Sjölander A, Gabriel EE. Aim for clinical utility, not just predictive accuracy. Epidemiology (Cambridge, Mass). 2020;31:359. Shih WJ, Lin Y. On study designs and hypotheses for clinical trials with predictive biomarkers. Contemp Clin Trials. 2017;62:140–5. Shih WJ, Lin Y. Relative efficiency of precision medicine designs for clinical trials with predictive biomarkers. Stat Med. 2018;37:687–709. Sargent DJ, Conley BA, Allegra C, Collette L. Clinical trial designs for predictive marker validation in cancer treatment trials. J Clin Oncol. 2005;23:2020–7. Pearl J. Causality. New York, NY: Cambridge University Press; 2009. Andersen PK, Klein JP, Rosthøj S. Generalised linear models for correlated pseudo-observations, with applications to multi-state models. Biometrika 2003;90:15–27. Nelson W. Hazard plotting for incomplete failure data. J Qual Technol. 1969;1:27–52. Aalen O. Nonparametric inference for a family of counting processes. Ann Statist, 1978;6:701–26. Cox DR. Regression models and life tables (with discussion). J R Stat Soc. 1972;34:187–220. Aalen OO, Cook RJ, Røysland K. Does Cox analysis of a randomized survival study yield a causal treatment effect? Lifetime Data Anal. 2015;21:579–93. Martinussen T, Vansteelandt S, Andersen PK. Subtleties in the interpretation of hazard contrasts. Lifetime Data Anal. 2020;26:833–55. Hernán MA. The hazards of hazard ratios. Epidemiology. 2010;21:13. Klein JP, Gerster M, Andersen PK, Tarima S, Perme MP. SAS and R functions to compute pseudo-values for censored data regression. Comput Methods Prog Biomed. 2008;89:289–300. Klein JP, Logan B, Harhoff M, Andersen PK. Analyzing survival curves at a fixed point in time. Stat Med. 2007;26:4505–19. Overgaard M, Parner ET, Pedersen J. Asymptotic theory of generalized estimating equations based on jack-knife pseudo-observations. Ann Stat. 2017;45:1988–2015. Irwin J. The standard error of an estimate of expectation of life, with special reference to expectation of tumourless life in experiments with mice. Epidemiol Infect. 1949;47:188–9. Andersen PK, Hansen MG, Klein JP. Regression analysis of restricted mean survival time based on pseudo-observations. Lifetime Data Anal. 2004;10:335–50. Rubinstein LV, Gail MH, Santner TJ. Planning the duration of a comparative clinical trial with loss to follow-up and a period of continued observation. J Clin Epidemiol. 1981;34:469–79. Simon R, Maitournam A. 
Evaluating the efficiency of targeted designs for randomized clinical trials. Clin Cancer Res. 2004;10:6759–63. AB, MCS, AS and EEG are partially funded by The Swedish Research Council and AB and EEG are partially funded by the Swedish Cancerfonden. Open access funding provided by Karolinska Institute. Department of Medical Epidemiology and Biostatistics, Karolinska Institutet, Solna, Sweden Adam Brand, Michael C. Sachs, Arvid Sjölander & Erin E. Gabriel Section of Biostatistics, Department of Public Health, University of Copenhagen, Copenhagen, Denmark Michael C. Sachs & Erin E. Gabriel Arvid Sjölander All authors contributed to the theoretical concepts contained in this manuscript. AB wrote the simulation code and drafted the manuscript. All authors revised the manuscript, approved the final version, and agreed to be accountable for all aspects of the work. Correspondence to Adam Brand. Supplement text Figure S1 Table S1 Brand, A., Sachs, M.C., Sjölander, A. et al. Confirmatory prediction-driven RCTs in comparative effectiveness settings for cancer treatment. Br J Cancer (2023). https://doi.org/10.1038/s41416-023-02144-x
Code fusion information-hiding algorithm based on PE file function migration Zuwei Tian ORCID: orcid.org/0000-0002-9555-80791 & Hengfu Yang1 EURASIP Journal on Image and Video Processing volume 2021, Article number: 2 (2021) Cite this article The PE (portable executable) file format is characterised by diversity, variable file size, complex file structure and a single, uniform format, which makes it easy to use as a carrier of information hiding, especially for hiding large amounts of data. This paper proposes an information-hiding algorithm based on PE file function migration, which uses a disassembly engine to disassemble the code section of the PE file, performs function recognition, and shifts the entire code of system or user-defined functions to the last section of the PE file. It then hides information in the original code space. The hidden information is combined with the main functions of the PE file and coupled with the key code of the program, which further enhances the concealment performance and anti-attack capability of the system. The PE file is the standard format for executable files in the Windows environment and one of the most widespread software formats on the Internet. The code section is the most important section of the PE file; it stores the executable instruction code, including user-defined function code and statically linked library function code, and forms the main part of the PE file. Combining hidden information with program instruction code can effectively improve the concealment of information-hiding algorithms based on executable files. At present, PE-based information-hiding algorithms fall into the following three categories. The first is information hiding based on the redundant space of PE files [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20]. The second is information hiding based on PE file data resources [21,22,23]. The third is information hiding based on the PE file import table [24,25,26,27,28]. Existing PE file hiding algorithms have the following main shortcomings. First, the redundant space of PE files is well known to anyone familiar with the PE file format, and there are powerful PE file analysis tools on the market, such as Stud_PE, PE Explorer and LordPE. Hiding information in the redundant space inherent to PE files therefore offers poor security. Second, the hiding space is too concentrated, so the hidden information is easily exposed and concealment is poor. Third, the structure of the PE file is transparent; when structural characteristics of the PE file are used to hide information, any transformation of those characteristics destroys the hidden information. Fourth, the hidden information is not combined with the program's functionality and has no close association with the program itself; the coupling between hidden information and program instruction code is low, so the ability to resist deletion, modification, filing and other attacks is poor [29,30,31]. Based on a thorough analysis of the characteristics of the PE file code section, this paper proposes a highly concealed information-hiding algorithm based on the migration of code-section functions, which enhances concealment and improves the system's anti-attack capability. 
First, the code section of the PE file is disassembled by the disassembly engine and a function recognition algorithm is used to identify the standard functions in the program; the identified function modules are then migrated to the redundant space of the code section or to the last section. In this way, the hidden information is closely combined with the program instruction code, greatly improving the system's concealment and resistance to attack. The rest of the paper is organized as follows: Section 2 analyses the structure of the PE file code section. Section 3 describes the proposed method. Section 4 presents experimental results and discussion. Finally, Section 5 summarizes the paper. PE file code section anatomy Key data structures of code section The section name of the code section in the PE file is generally .code (or .text, depending on the compiler), and its property value is 0x60000020, which indicates that the section is executable and readable and contains instruction code. The code section is generally located immediately after the section table, as the first section of the PE file, in front of the other sections. The data structures related to the code section are the VirtualSize, VirtualAddress, PointerToRawData and SizeOfRawData fields. When the PE file is loaded, the Windows loader reads SizeOfRawData bytes of data, the entire code section, starting at file offset PointerToRawData, and maps them into memory. Data organization of code section The PointerToRawData field in the code section header indicates the offset address of the code section in the disk file, and SizeOfRawData indicates the amount of disk space taken up by the code section after file alignment, as disk file space is generally aligned to 512 bytes. The space that the code section actually occupies (the value of SizeOfRawData) is larger than the value of VirtualSize (unaligned), thus creating the redundant space of the code section; the difference between SizeOfRawData and VirtualSize is the size of this redundant space. In the code section, the redundant space is filled with 0. Setting Sr as the size of the redundant space for the section: $$ S_r = \mathrm{SizeOfRawData} - \mathrm{VirtualSize} $$ Figure 1 shows the code section structure of a PE file generated by the VC++ compiler. Code section structure of PE file Determination of the program entry address In general, when the PE file is loaded by the Windows loader, the code section is loaded at address 0x00401000 (the value of ImageBase plus the value of VirtualAddress). The AddressOfEntryPoint field of the IMAGE_OPTIONAL_HEADER32 structure indicates the address of the program's executable code entry point, that is, the RVA of the first instruction to be executed in the PE file; its value plus the in-memory base address of the PE file is the starting virtual address of the program's entry function at run time. For example, if the AddressOfEntryPoint value of a PE file is 0x0001120D, the entry address of the program is 0x0041120D. Some programs that insert code into PE files modify this address to point to their own code and then jump back after that code has executed. The IAT is located before the module entry point in the .text segment (the IAT is in effect a collection of jump targets). 
When the Windows loader loads the executable program into the address space of the process, the actual memory address of each imported function is determined, and so is the IAT. Proposed method In this section, we describe the proposed information-hiding scheme based on PE file function migration, which hides information in the original code space. A PE file usually has at least one code section, which holds executable code. The functionality of a program is achieved by executing the instructions in the code section. Therefore, hiding information in the PE file code section, combining the hidden information with instruction code, can effectively improve concealment and resistance to attack. However, if information is hidden directly in the code section, some of it will be decoded as highly abnormal instructions when disassembled, which easily arouses the suspicion of attackers. To improve resistance to disassembly and other reverse-analysis tools, we convert the hidden information into instructions disguised as a function (functionalization) and embed it into the code section. The hidden information and the PE file's executable code are thus integrated, which improves concealment. At the same time, to solve the problem of over-concentration of hidden information, we propose migrating one or more functionally independent modules (functions) in the executable code of the PE file to redundant space in the code section and hiding the information among the normal function instruction code, so that the hidden information and the PE file's executable code are closely integrated. This further enhances the concealment and security of the system. Disassembly algorithm The principle of disassembly software is to first identify the format of the executable file, distinguish code from data, and determine the file offset address of the code section entry point; lexical and grammatical analysis is then used to decode according to the instruction format of the X86 architecture, and finally the corresponding assembly instructions are output. Disassembly technology can be divided into static disassembly and dynamic disassembly. Static disassembly refers to converting the target program into the corresponding assembly language program without executing it. Dynamic disassembly refers to tracking the execution of the target program and disassembling it during execution. One advantage of static disassembly is that the entire target program can be processed at once, while dynamic disassembly can only handle the parts of the target program that are actually executed. Commonly used disassembly tools include IDA Pro, OllyDbg, Win32Dasm, SoftICE, WinDbg, etc. Design and implementation of disassembler The role of the disassembly engine is to translate machine code into assembly instructions. Developing an excellent disassembly engine requires an in-depth understanding of machine instruction encoding for Intel's X86 architecture and has a long development cycle. Common open-source disassembly engines are udis86, Proview, ade, xde, etc. [28]. OllyDbg's own disassembly engine is also relatively powerful, but its instruction set coverage is incomplete and it does not support MMX and SSE well. A small sketch of how the code-section bytes and header fields described in the previous section can be located programmatically is given below. 
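The following is a minimal sketch, assuming the third-party Python package pefile rather than the authors' C/udis86 tooling, of locating the executable section, computing the redundant space Sr = SizeOfRawData - VirtualSize from Section 2, and reading the raw bytes that a disassembler would consume. The file path is only an example.

```python
# Sketch: inspect the code-section header fields of a PE file with "pefile".
import pefile

pe = pefile.PE(r"C:\Windows\System32\notepad.exe")   # example target
entry_rva = pe.OPTIONAL_HEADER.AddressOfEntryPoint
print("entry VA: 0x%08X" % (pe.OPTIONAL_HEADER.ImageBase + entry_rva))

IMAGE_SCN_MEM_EXECUTE = 0x20000000
for sec in pe.sections:
    if not (sec.Characteristics & IMAGE_SCN_MEM_EXECUTE):
        continue                                      # keep only executable sections
    name = sec.Name.rstrip(b"\x00").decode(errors="ignore")
    redundant = sec.SizeOfRawData - sec.Misc_VirtualSize   # S_r from Section 2
    code_bytes = sec.get_data()                            # raw section contents
    print(f"{name}: PointerToRawData=0x{sec.PointerToRawData:X}, "
          f"VirtualAddress=0x{sec.VirtualAddress:X}, "
          f"S_r={redundant} bytes, {len(code_bytes)} bytes read")
```

The bytes returned here are exactly what the disassembler described next would be fed, together with the section's load address (ImageBase + VirtualAddress).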
We use Udis86 to build a disassembler, the main steps of which are as follows: Step 1: Deploy the code and header files of the Udis86 disassembly engine to the system or directly into the project, and include the "udis86.h" header file. Step 2: Define a Udis86 object (ud_t ud_obj); set the disassembly mode to 32 bits, set the output syntax to Intel format, set the start address of the first instruction, and set the input source, which can be memory or, via ud_set_input_file, a file; perform other initialization work. Step 3: Loop, disassembling all the instructions in the input source. Step 4: Carry out instruction analysis. Step 5: Record the results of the instruction analysis. Using the disassembler built in this way to disassemble the system WordPad program (write.exe) gives the same result as OllyDbg. A high-quality disassembler is the basis for further function recognition. Function identification and location In application programming, modular programming is usually adopted. Following the top-down method, the program is broken down into many functionally independent modules, each of which is implemented by a function. To implement complex functionality, a large number of library functions are provided by the system, including static link library functions and dynamic link library functions. Static link library functions include system library functions and dedicated library functions; during compiling and linking, just like user-defined functions, their code is linked into the target code of the executable program. For a called dynamic link library function, the target code is not in the executable file but in a DLL file. According to statistics, library function code accounts for an average of 50-90% of the target code in programs written in high-level languages [32]. To further improve concealment and integrate hidden information with the program's key function code, we propose an information-hiding algorithm based on migrating function code, and the recognition and location of functions is the basis for this algorithm. After disassembling the code section of the target program, according to compilation principles and the function-call convention, the starting address of a function module is generally the value of the address expression after a CALL instruction; that is, if there is an instruction CALL ADDR in the assembly code, there must be a function module with ADDR as its starting address. The function module ends with a RET instruction. Since there may be multiple exits in the function module (multiple RET instructions), according to the characteristics of the function, the end address of the function can be determined by the following algorithm (a simplified sketch of this procedure is also given below): Function: Determine the end address of function module. Input: Starting address of the function module (F_begin). Output: End address of the function module (F_end). Through the address expression after the CALL instruction and the above algorithm, the start address and the end address of a function module can be determined, and the length of the function instruction code can be calculated. Using this algorithm to test notepad.exe and the Thunder program thunder.exe, the experiment shows that 59 functions can be effectively identified in notepad.exe and 4086 functions in thunder.exe, which is sufficient for the needs of function migration (Table 1). 
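Since the listing of the end-address algorithm is not reproduced above, the following is a simplified sketch of the same idea using the capstone disassembler (a substitute for udis86, which is a C library). It treats every direct CALL target as a function start and scans forward to the first RET; handling of multiple RET exits, tail jumps and data embedded in code, which the full algorithm presumably addresses, is deliberately omitted.

```python
# Sketch: locate functions as CALL targets and approximate their end addresses.
import capstone

def find_functions(code: bytes, base_va: int):
    md = capstone.Cs(capstone.CS_ARCH_X86, capstone.CS_MODE_32)
    insns = list(md.disasm(code, base_va))
    by_addr = {i.address: idx for idx, i in enumerate(insns)}

    # Collect direct CALL targets inside the section as function start addresses.
    starts = set()
    for i in insns:
        if i.mnemonic == "call" and i.op_str.startswith("0x"):
            target = int(i.op_str, 16)
            if target in by_addr:
                starts.add(target)

    # For each start, walk forward until the first RET instruction (approximation).
    functions = []
    for start in sorted(starts):
        for i in insns[by_addr[start]:]:
            if i.mnemonic.startswith("ret"):
                end = i.address + i.size
                functions.append((start, end, end - start))   # (F_begin, F_end, length)
                break
    return functions
```

Feeding this routine the code-section bytes and load address obtained earlier yields (start, end, length) triples, which is all the migration step below needs.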
Because the purpose of function recognition in our system is to migrate functions in order to hide information, we simplify the function-recognition algorithm: some special functions, and functions with very short code, are ignored by the algorithm, which does not affect the effectiveness of the algorithm or of the information hiding. If the amount of information to hide is large, the information can be hidden in an extended function area by extending the length of a migrated function, or, after the information to be hidden has been functionalized, it can be stored in the last section of the PE file, scattered between two migrated functions.

Table 1 Example of function recognition

Function migration

To combine the hidden information closely with the instruction code of the executable, we propose an information-hiding algorithm based on function migration, which hides the information in the storage area of an original function module by migrating that function module in the target program to the last section. Function recognition is the basis of function migration: a function is located by function recognition; its file addresses (start and end address) in the disk file, its relative virtual address, and its length are determined; then the relevant instructions inside the function are corrected and the relevant attribute values of the PE file are overwritten. In this way the migration of the function is implemented. Since, at link time, the code section of the target program holds the code of the called static library functions first, followed by the code of the user-defined functions, user-defined functions are preferred when selecting functions to migrate, in order to improve concealment. Let OFFSET_old and OFFSET_new denote the original and new offsets of a CALL instruction, RVA_old the relative virtual address of the CALL instruction before migration, and RVA_new its relative virtual address after migration. SECTION_oldsize and SECTION_newsize denote the actual size (VirtualSize) of the section before and after the function is migrated, and len(P) denotes the length of the instruction code of function P. The main steps of the function-migration algorithm are as follows. Step 1: Locate the function with the function-recognition algorithm and read the selected function module to be migrated into memory. Step 2: Seek to the end of the last section (the PointerToRawData value of the last section plus its actual size, VirtualSize), write the starting address of the function to be migrated (this address is used to locate the function when extracting information), and then write the instruction code of the function to be migrated. Step 3: Fix the address values in the CALL instructions inside the function after migration (a small illustrative sketch of this displacement fix-up is given after the extraction algorithm below):
$$ \mathrm{OFFSET}_{\mathrm{new}}=\mathrm{OFFSET}_{\mathrm{old}}+\mathrm{RVA}_{\mathrm{old}}-\mathrm{RVA}_{\mathrm{new}} $$
Step 4: Fix the size of the section and align the section's SizeOfRawData value to FileAlignment:
$$ \mathrm{SECTION}_{\mathrm{newsize}}=\mathrm{SECTION}_{\mathrm{oldsize}}+\mathrm{len}\left(\mathrm{P}\right)+4 $$
Step 5: Fix the PE file image size; the image size is aligned according to the value of SectionAlignment. Step 6: Change the section attributes to executable. Step 7: Set the relocation table size to 0.
Step 8: Write a jump instruction at the beginning of the original function that jumps to the start address of the migrated function. Migrating the function and patching it in this way ensures that the migration does not affect the behavior of the program, so that the area occupied by the original function module can be used for information hiding; the hidden information is then tightly coupled with the key code of the executable program, which effectively improves the concealment and security of the system (Fig. 2).

Fig. 2 Example of information hiding by function migration

Information-hiding algorithm

After function recognition and function migration, the information hiding itself is relatively simple; its main steps are as follows. Input: original carrier PE file P, information to be hidden M, public key pk. Output: PE file P' with hidden information. Step 1: Using the public key pk and the asymmetric encryption algorithm RSA, encrypt the information M to obtain the encrypted information M' = Encrypt(pk, M). Step 2: Disassemble the code section of the original carrier PE file P with the disassembling engine. Step 3: Apply the function-recognition algorithm to the assembly code produced in Step 2; record the start address, end address, and length of each identified function, count the total number of identified functions and the sum of their lengths, and number the functions, ordered by size, using their starting addresses. Step 4: According to the length of the information to be hidden, migrate functions, in increasing order of function number, to the end of the last section, writing the 4-byte start address of the original function module in front of the migrated code. Step 5: Write a jump instruction at the beginning of the original function that jumps to the beginning of the migrated function, and then write the length of the hidden information followed by the hidden information itself. Step 6: If all the information has been hidden, go to Step 7; otherwise return to Step 4 and repeat the same operation. Step 7: Modify the size of the PE file section and the image size, and change the section attributes to executable.

Information extraction algorithm

Input: a PE file P' with hidden information, private key sk. Output: hidden information M. Step 1: Starting from the end of the last section, move back 4 bytes (toward the beginning of the section) and record the current pointer position as SectionAddr. Step 2: With SectionAddr as the starting address, read 4 bytes as an address value Addr and check whether the unit at Addr holds a JMP instruction; if not, decrease SectionAddr by 1 and continue the scan. Step 3: If that jump instruction jumps exactly to the location SectionAddr + 4, then SectionAddr + 4 is the start of the migrated function and the starting address of the original function is stored in the doubleword at SectionAddr; go to Step 4. Otherwise decrease SectionAddr by 1, read 4 consecutive bytes, and continue the scan. Step 4: Read the starting address of the original function from the doubleword at SectionAddr, skip the JMP instruction at that original address, read the length Len of the hidden information, and extract the hidden information fragment of Len bytes.
Step 5: Check whether SectionAddr points to the beginning of the last section; if so, the reverse scan ends, otherwise decrease SectionAddr by 4 and go back to Step 2. Step 6: The extracted fragments, obtained in reverse order, are reassembled into the secret information M'. Step 7: Using the private key sk and the asymmetric encryption algorithm RSA, the information M' is decrypted to obtain M = Decrypt(sk, M'), where M is the plaintext.

Experimental results and discussion

Results of the experiment

The PE files used in the experiment are of three different types: some are applications shipped with the Windows operating system, located in the windows\system32 folder, such as notepad.exe, write.exe, and winmine.exe; some are common desktop applications, such as qq.exe, thunder.exe, WinRAR, and 360sd.exe; and some are applications we wrote ourselves. From these, 200 PE files were randomly selected as test programs. In the experiment, the watermark information is embedded in all identifiable functions of each test program. The results of the experiment are as follows: as can be seen from Table 2, in general, the larger the file and the richer its functionality, the more functions are recognized and the greater the hiding capacity.

Table 2 Embedded capacity and bit rate of function migration methods

Concealment analysis

The function-migration method proposed in this paper moves recognized function modules to the last section and hides information in the original function code area. Using the service provided by the website www.virscan.org, the PE files containing hidden information were uploaded to the server for virus scanning; the results show that the files are reported as normal (the website provides up to 37 antivirus engines). The method is therefore able to resist detection by common antivirus software and analysis by static reverse-analysis tools.

Embedded capacity

The embedded capacity of the plain function-migration method is related to the size of the PE file and to the number of static library functions called in the file. In general, the larger the PE file and the more complex its functionality, the more static library functions are called, the more functions can be identified and migrated, and the greater the embedded capacity.

Anti-filling attack experiment

Hiding information in the redundant space of a PE file has shortcomings: the hidden information is too concentrated, the hiding location is easily exposed, the capacity is small, and the hidden information can be destroyed by filling the known redundant space with all 0s or all 1s. Extending the last section of the PE file, or adding a section, to hide information solves the capacity problem, but, as with the use of redundant space, the information is over-concentrated and the hiding location is easily disclosed; moreover, because the hidden data is not integrated with the program's main functional code, filling with all 0s or all 1s from the end of the last section toward the front destroys the hidden information while the program still runs properly. The function-based method instead hides the information in the original function code area by migrating the code of recognized system functions or user-defined functions to the last section of the PE file.
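As promised above, here is a small sketch of the CALL displacement fix-up of Step 3 of the migration algorithm. On x86, a direct near CALL (opcode E8) stores a 32-bit displacement relative to the address of the next instruction, so when the instruction is copied to a new RVA the displacement must change by exactly RVA_old - RVA_new for the call to keep pointing at the same target. The function below is our own illustration of that identity; the names and the in-memory patching style are assumptions, not code taken from the paper.

```c
#include <stdint.h>
#include <string.h>

/* Patch the rel32 displacement of a direct near CALL (opcode 0xE8) that has
   been copied from rva_old to rva_new, so that it still reaches the same
   absolute target:  OFFSET_new = OFFSET_old + RVA_old - RVA_new.            */
static void fix_call_displacement(uint8_t *insn, uint32_t rva_old, uint32_t rva_new)
{
    if (insn[0] != 0xE8)                 /* only direct near calls are handled */
        return;

    int32_t disp_old;
    memcpy(&disp_old, insn + 1, sizeof disp_old);   /* displacement is unaligned */

    /* target = rva_old + 5 + disp_old must equal rva_new + 5 + disp_new */
    int32_t disp_new = disp_old + (int32_t)(rva_old - rva_new);

    memcpy(insn + 1, &disp_new, sizeof disp_new);
}
```

The 5-byte instruction length cancels on both sides, which is why only the difference of the two RVAs appears in the paper's formula.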
Because the information is hidden in the code area of the original function modules of the PE file, filling forward from the end of the last section with all 0s or all 1s does not destroy the hidden information; instead, such a filling attack destroys the migrated original function code, so that the program can no longer run. Take notepad.exe, the Notepad program that ships with the Windows operating system, as an example: after the function-migration method is used to hide information, the program still runs properly and the hidden information can be extracted. Traditional methods hide the secret information in the redundant space, the data and resource sections, or the import table of the PE file. They have shortcomings such as the redundant space being publicly known, the hiding space being too concentrated, the hidden information being easily destroyed, and the loose association between the hidden information and the key code of the program. Compared with these earlier methods, the method proposed in this paper overcomes their shortcomings: it fuses the secret information with the instruction code of the program through function migration and stores it in the code section. The hidden information is scattered, it is difficult for an adversary to distinguish the location of the secret information from the instruction code, and the hidden information is coupled with the key code of the program: once the secret information in the program is destroyed, the program no longer executes correctly. The method is therefore more secure and more resistant to attacks than previous methods.

Conclusion and future work

In this paper, a large-capacity information-hiding algorithm based on function migration is presented. The code section of the PE file is disassembled with a disassembling engine, functions are recognized, and the code of recognized static library functions or user-defined functions is migrated to the last section of the PE file; the information is then hidden in the space freed by the migrated functions. In this way the hidden information is combined with the main functional code of the PE file and coupled with its key code, which further enhances the concealment and attack resistance of the system. Theoretical analysis and experimental results show that, compared with similar algorithms, the proposed algorithm integrates the information to be hidden with the program instruction code through function migration and offers large hiding capacity, good concealment, and strong resistance to attacks.

The datasets used and analyzed during the current study are available from the corresponding author on reasonable request.

Abbreviations: PE: Portable executable; IAT: Import address table; RVA: Relative virtual address; DLL: Dynamic link library; MMX: Multimedia extensions; SSE: Streaming SIMD extensions

Z. Wu, S. Feng, J. Ma, Information hiding scheme and implementation of PE file. Comput. Eng. Appl. 41(27), 148–150 (2005) R. El-Khalil, A.D. Keromytis, Hiding information in program binaries, Proc. of the 6th International Conference on Information and Communications Security (Springer, Berlin, 2004), pp. 287–291 R.K. Tiwari, G. Sahoo, A novel steganographic methodology for high capacity data hiding in executable files. Int. J. Internet Technol. Secured Trans. 3(2), 210–222 (2011) S.B. Che, S. Jin, G.W. Ling, in International Conference on Computer Science and Education (ICCSE10). Software watermark research based on portable execute file (Hefei, 2010), pp.
1367–1372 Z. Sha, H. Jiang, A. Xuan, in the 3rd International Conference on Genetic and Evolutionary Computing (WGEC09). Software watermarking algorithm by coefficients of equation (Guilin, 2009), pp. 410–413 X. Wang, Y. Wang, X. Zhang, et al., Research on PE file software watermark against similarity attack. Netw. Secur. Technol. Appl., 82–84 (2007) A.A. Zaidan, B.B. Zaidan, A.W. Naji, et al., in International Conference on Advanced Management Science (ICAMS09). Approved undetectable-antivirus steganography for multimedia information in PE-file (Singapore, 2009), pp. 437–441 H. Alanazi, H.A. Jalab, A.A. Zaidan, et al., New framework of hidden data with in non multimedia file. Int. J. Comput. Netw. Secur. 1, 46–53 (2010) A.W. Naji, A.A. Zaidan, B.B. Zaidan, Challenges of hidden data in the unused area two within executable files. J. Comput. Sci. 1, 890–896 (2009) A.A. Zaidan, B.B. Zaidan, A.W. Naji, et al., in International Conference on Information management and engineering (ICIME09). Securing cover-file of hidden data using statistical technique and AES encryption algorithm (Malaysia, 2009), pp. 35–40 A. Haveliya, A new approach for secret concealing in executable file. Int. J. Eng. Res. Appl. 2(2), 1672–1674 (2012) B.B. Zaidan, A.A. Zaidan, F. Othman, et al., in Proceeding of the International Conference on Cryptography, Coding and Information Security. Novel approach of hidden data in the unused area 1 within exe files using computation between cryptography and steganography (Paris, 2009), pp. 1–22 M.R. Islam, A.W. Naji, A.A. Zaidan, et al., New system for secure cover file of hidden data in the image page within executable file using statistical steganography techniques. Int. J. Comput. Sci. Inf. Secur. 7(1), 273–279 (2009) B.B. Zaidan, A.A. Zaidan, F. Othman, New technique of hidden data in PE-file with in unused area one. Int. J. Comput. Electrical Eng. (IJCEE) 1(5), 669–678 (2009) A.W. Naji, A.A. Zaidan, B.B. Zaidan, et al., New approach of hidden data in the portable executable file without change the size of carrier file using distortion techniques. Int. J. Comput. Sci. Netw. Secur. 9(7), 218–224 (2009) A.A. Zaidan, B.B. Zaidan, A.J. Hamid, A new system for hiding data within (unused area two + image page) of portable executable file using statistical technique and advance encryption standared. Int. J. Comput. Theory Eng. 10(5), 125–131 (2010) D. Shin, Y. Kim, K. Byun, et al., in Proceedings of the 6th Australian Digital Forensics Conference. Data hiding in windows executable files (Perth, 2008), pp. 1–8 L. Qian, F. Yong, D. Tan, Z. Changshan, Research on information hiding technology based on unlimited capacity of PE file. Comput. Appl. Res. 28(7), 2758–2760 (2011) W. Wei, K. Liu, X. Wan, High capacity information hiding based on PE file format. J. Nanjing Univ. Sci. Technol. 39(01), 45–49 (2015) Y. Li, X. Shi, Research on PE file information hiding technology. Netw. Secur. Technol. Appl. (11), 51–52 (2017) X. Xu, X. Xu, H. Liang, et al., Information hiding research and scheme implementation of PE file resource section. Comput. Appl. 27(3), 621–623 (2007) D. Qingfeng, Y. Wang, Z. Kaize, W. Xi, Information hiding scheme based on PE file resource data. Comput. Eng. 35(13), 128–130 (2009) Z. Tian, Y. Li, L. Yang, Research on PE file information hiding technology based on import table migration. Comput. Sci. 43(01), 207–210 (2016) J. Xu, J.F. Li, Y.L. Ye, et al., An information hiding algorithm based on bitmap resource of portable executable file. J. Electron. Sci. 
Technol., 181–184 (2012) D. Qingfeng, W. Yanbo, Z. Xiongwei, Z. Kaize, Spread spectrum software watermarking scheme based on the number of import function references. Comput. Res. Dev. 46(supply), 88–92 (2009) F. Long, J. Liu, X. Yuan, A software watermark for transforming the structure of PE file import table. Comput. Appl. 30(1), 217–219 (2010) A.P. Namanya, I.U. Awan, J.P. Disso, M. Younas, Similarity hash based scoring of portable executable files for efficient malware detection in IoT. Future Generation Computer Systems (2019) S.L. Shiva Darshan, C.D. Jaidhar, Performance evaluation of filter-based feature selection techniques in classifying portable executable files. Proc. Comput. Sci. 2018, 125 (2018) X. Wang, J. Jianming, Z. Shujing, B. Liang, A fair blind signature scheme to revoke malicious vehicles in VANETs. Comput. Mater. Continua 58(1), 249–262 (2019) J. Wang, H. Wang, J. Li, X. Luo, Y.-Q. Shi, S. Kr. Jha, Detecting double JPEG compressed color images with the same quantization matrix in spherical coordinates. IEEE Trans. CSVT (2019). https://doi.org/10.1109/TCSVT J. Wang, T. Li, X. Luo, Y.-Q. Shi, S. Jha, Identifying computer generated images based on quaternion central moments in color quaternion wavelet domain. IEEE Trans. CSVT 29(9), 2775–2785 (2018) K. Chen, Z. Liu, Current situation and progress on decompilation research. Comput. Sci. 28(5), 113–115 (2001) Thanks to the anonymous reviewers for their constructive suggestions that helped improve this paper. This work is supported in part by the National Natural Science Foundation of China (61373132, 61872408), the Key Laboratory of Informationization Technology for Basic Education in Hunan Province (2015TP1017), the Hunan Provincial Higher Education Reform Research Project (2012[528]), and the Project of Research Study and Innovative Experiment for College Students in Hunan Province (2017[873]). School of Information Science and Engineering, Hunan First Normal University, 410205, Changsha, China: Zuwei Tian & Hengfu Yang. The first author (Zuwei Tian) participated in designing the scheme and drafted the manuscript. The second author (Hengfu Yang) carried out the code design and the experiments and participated in designing the scheme. All authors read and approved the final manuscript. Zuwei Tian received the B.E. degree in computer engineering from Xiangtan University, China, and the master's degree in computer science from National Defense Science and Technology University, China. He received the Ph.D. degree from Hunan University, China. He is a computer science professor at Hunan First Normal University, China. He leads a team of researchers and students in areas of information security such as information hiding. He has published more than 20 journal articles and his research has been funded by the Natural Science Foundation Committee of China. Hengfu Yang received the B.E. degree in computer engineering from Xiangtan University, China, and the master's degree in computer science from Guizhou University, China. He received the Ph.D. degree from Hunan University, China. He is a computer science professor at Hunan First Normal University, China. His research interests include information hiding, image processing, and multimedia security. Correspondence to Zuwei Tian. Tian, Z., Yang, H. Code fusion information-hiding algorithm based on PE file function migration. J Image Video Proc. 2021, 2 (2021).
https://doi.org/10.1186/s13640-020-00541-3 Received: 24 April 2020. Accepted: 11 November 2020. Keywords: Information hiding; PE file; Code fusion. New Advances on Intelligent Multimedia Hiding and Forensics
CommonCrawl
January 2013, 12(1): 429-449. doi: 10.3934/cpaa.2013.12.429 Existence and multiplicity of semiclassical states for a quasilinear Schrödinger equation in $\mathbb{R}^N$. Minbo Yang (Department of Mathematics, Zhejiang Normal University, Jinhua, 321004, China) and Yanheng Ding (Institute of Mathematics, AMSS, Chinese Academy of Sciences, Beijing 100080). Received May 2011, Revised March 2012, Published September 2012. In this paper we consider the following modified version of the nonlinear Schrödinger equation: $-\varepsilon^2\Delta u +V(x)u-\varepsilon^2\Delta (u^2)u=g(x,u)$ in $\mathbb{R}^N$, $N\geq 3$, where $g(x,u)$ is a superlinear but subcritical function. Applying variational methods we show the existence and multiplicity of solutions provided $\varepsilon$ is sufficiently small. Keywords: Quasilinear Schrödinger equation, Mountain Pass Theorem, semiclassical states. Mathematics Subject Classification: Primary: 35J20, 35J60, 35Q5. Citation: Minbo Yang, Yanheng Ding. Existence and multiplicity of semiclassical states for a quasilinear Schrödinger equation in $\mathbb{R}^N$. Communications on Pure & Applied Analysis, 2013, 12 (1) : 429-449. doi: 10.3934/cpaa.2013.12.429
CommonCrawl
Method of determining base values of traits in isolated populations

Prelude: I came across a discussion about the correct formula for calculating the average IQ of offspring, which goes something like the following
$$ 100 + \frac35 \left( \left(\text{father's IQ} + \frac{\text{mother's IQ}}{2} \right) - 100 \right) $$
but it does not matter for this question, and I do not know or particularly care if this is the correct formula. As you all know, there is regression to the mean, i.e., if both parents have a freakishly high IQ of, say, 160, but both come from a "base population" with average IQ 100, the formula calculates the average IQ of the offspring to be lower than the arithmetic mean, because the parents are both outliers of their own "base population". Now, the above formula supposedly takes regression to the mean into account (and the mean it takes into account is IQ 100). As some people then pointed out, there is assortative mating, and very smart people often descend from very smart people, so the formula does not work for couples of "good pedigree". I take it that this regression to the mean happens because these outliers with high IQ still carry some of the genes of the base population with IQ 100, and those are likely to be inherited by their children, thus lowering the IQ of the offspring in comparison to their high-IQ parents. How many generations of ancestors of average IQ, say, 110, do you have to have in order for the regression to the mean to go towards IQ 110? More generally and more interestingly, how do you determine the average value of a trait of a reasonably isolated subpopulation that stems from and still lives inside a larger population? I am specifically interested in subpopulations as small as families, families with mean values of traits that deviate significantly from that of the overall population. Obviously, the first family generation with a mean IQ of 110 does not make it likely that the mean of the offspring will be 110 as well if the direct ancestors, i.e., the zeroth generation and those before, average around 100. At what stage, or how many generations in, can we reasonably assume that IQ 110 is the value that the mean regresses to, under the assumption that there is constant assortative mating for intelligence and no freakish outliers occur in the family tree? I guess it boils down to the following: How big do the two family trees of a couple have to be in order to reliably infer the average value of a trait (like intelligence: highly heritable, polygenic) in future offspring? (No eugenics, no one gets selected other than by their mates, no evil scientists lurking.) genetics human-genetics population-genetics theoretical-biology sexual-selection Rodrigo de Azevedo Maximilian

Consider a couple where the male has an IQ of 125 and the female has an IQ of 100. What average IQ does the formula give? – Rodrigo de Azevedo

I put in an answer to this question, with some reservations noted below, as there is a somewhat straightforward genetics answer to at least part of it. I'll leave the answer up for a little while as a sort of placeholder, up to the point that the question hopefully gets taken down.

Original answer

I'll first note the strong "yikes!" aspect of this question, which is closely related to e.g. eugenics.
However, I believe that you will be most interested in the "breeder's equation", which is, I believe, the source of some of your values:
$\Delta Z = h^2 S$
$\Delta Z$ = change in mean trait value in a population per unit time (usually a generation)
$h^2$ = narrow-sense heritability
$S$ = strength of selection.
I suggest going to that resource for more information. I will quickly note that this model is strongly dependent on a number of assumptions:
1. You have accurately estimated the narrow-sense heritability.
2. You have a large, panmictic population (you have raised this with assortative mating).
3. a lot of other stuff
(1) in particular is a big deal for IQ, or really nearly any human trait, where there are reported heritability estimates in the 50% range from twin studies, but we honestly don't have a good estimate of this stuff, and there is really strong confounding from environmental variation (contra evo psych). I would not expect ("normal") IQ variation to respond very strongly to selection if environments were held constant. (There are obviously very clear examples of systematic environmental effects on IQ.) When I say "normal" variation, I mean among people who do not have cognitive disabilities related to specific variants of large effect (such as abnormal karyotypes). If you include those, you will get a large initial response to selection due to selecting on that unambiguous variation. Why not just measure the trait of interest?
Maximilian Press
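To make the regression-to-the-mean intuition in the question concrete, here is a small worked example with the breeder's equation. The numbers are purely illustrative assumptions (both parents at IQ 160 as in the question, population mean 100, and an assumed narrow-sense heritability of 0.5), not estimates endorsed by the answer above:
$$ S = \frac{160+160}{2} - 100 = 60, \qquad \Delta Z = h^2 S = 0.5 \times 60 = 30, \qquad \text{expected offspring mean} \approx 100 + 30 = 130 < 160. $$
With a heritability below 1, the expected offspring mean always lies between the mid-parent value and the population mean, which is exactly the regression the question describes.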
CommonCrawl
Mathematics > Rings and Algebras [Submitted on 1 Feb 2016 (v1), last revised 6 May 2021 (this version, v6)] Title: Algebras with a negation map Authors: Louis Halle Rowen Abstract: Our objective in this project is three-fold, the first two covered in this paper. In tropical mathematics, as well as other mathematical theories involving semirings, when trying to formulate the tropical versions of classical algebraic concepts for which the negative is a crucial ingredient, such as determinants, Grassmann algebras, Lie algebras, Lie superalgebras, and Poisson algebras, one often is challenged by the lack of negation. Following an idea originating in work of Gaubert and the Max-Plus group and brought to fruition by Akian, Gaubert, and Guterman, we study algebraic structures with negation maps, called \textbf{systems}, in the context of universal algebra, showing how these unify the more viable (super)tropical versions, as well as hypergroup theory and fuzzy rings, thereby "explaining" similarities in their theories. Special attention is paid to \textbf{meta-tangible} $\mathcal T$-systems, whose algebraic theory includes all the main tropical examples and many others, but is rich enough to facilitate computations and provide a host of structural results. Basic results also are obtained in linear algebra, linking determinants to linear independence. Formulating the structure categorically enables us to view the tropicalization functor as a morphism, thereby further explaining the mysterious link between classical algebraic results and their tropical analogs, as well as with hyperfields. We utilize the tropicalization functor to propose tropical analogs of classical algebraic notions. The systems studied here might be called "fundamental," since they are the underlying structure which can be studied via other "module" systems, which is to be the third stage of this project, involving a theory of sheaves and schemes and derived categories with a negation map. Comments: Reduced to 50 pages from original version of 75 pages. Refinement of previous versions, with the emphasis on systems with applications to tropical structures, hyperfields, and fuzzy rings. The introduction has been reworked, and a list of topics for further study Subjects: Rings and Algebras (math.RA); Commutative Algebra (math.AC); Algebraic Geometry (math.AG) MSC classes: 16Y60, 06F05, 13C10, 13C60, 18E10 (Primary), 12K10, 14T05 (Secondary) Cite as: arXiv:1602.00353 [math.RA] (or arXiv:1602.00353v6 [math.RA] for this version) From: Louis Rowen [v1] Mon, 1 Feb 2016 00:49:10 UTC (63 KB) [v2] Mon, 11 Apr 2016 21:24:56 UTC (85 KB) [v3] Tue, 11 Oct 2016 09:03:15 UTC (139 KB) [v4] Sun, 26 Feb 2017 20:37:17 UTC (156 KB) [v5] Fri, 11 May 2018 15:05:04 UTC (157 KB) [v6] Thu, 6 May 2021 10:35:41 UTC (137 KB) math.RA math.AC math.AG
CommonCrawl
Evolution equations driven by dissipative operators in Wasserstein spaces. Giulia Cavagnari, Politecnico di Milano. Wednesday 20 January 2021, 11:15 a.m. In this talk we present new results that fit into the recent theory of Measure Differential Equations introduced by B. Piccoli (Rutgers University-Camden). The state space where these evolution equations are set is the Wasserstein space of probability measures, hence tools of Optimal Transport are essential. The key point here is that the vector field itself maps into the space of probability measures lying on the tangent bundle, in a way compatible with the projection on the state space. We give a stronger definition of solution which indeed "selects" only one of the (not unique) solutions in the sense of Piccoli. In addition to uniqueness, we are also able to prove stability results. To do so, we borrow ideas from the theory of evolution equations driven by dissipative operators on Hilbert spaces, giving a notion of solution in terms of a so-called Evolution Variational Inequality. This is a joint work with G. Savaré (Bocconi University) and G. E. Sodini (TUM-IAS).

The spectral theorem for a normal operator on a Clifford module. David P. Kimsey, Newcastle University. In this talk we will consider the problem of obtaining a spectral resolution for a densely defined closed normal operator on a Clifford module $\mathcal{H}_n := \mathcal{H} \otimes \mathbb{R}_n$, where $\mathcal{H}$ is a real Hilbert space and $\mathbb{R}_n := \mathbb{R}_{0, n}$ is the Clifford algebra generated by the units $e_1, \ldots, e_n$ with $e_i e_j = -e_j e_i$ for $i \neq j$ and $e_j^2 = -1$ for $j=1,\ldots, n$. We shall see that any densely defined closed normal operator on a Clifford module admits an integral representation which is analogous to the integral representation for a densely defined closed normal operator on a quaternionic Hilbert space (which one may think of as a Clifford module $\mathcal{H}_2$) discovered by Daniel Alpay, Fabrizio Colombo and the speaker in 2014. However, the Clifford module setting sketched above with $n > 2$ presents a number of technical difficulties which are not present in the quaternionic Hilbert space case. In order to prove this result, one needs to slightly generalise the notion of $S$-spectrum to allow for operators which are not necessarily paravector operators, i.e., operators of the form $T = T_0 + \sum_{j=1}^n T_j e_j$. This observation has implications for a generalisation of the $S$-functional calculus and some related function theory, which we shall briefly highlight. The main thrust of this talk is based on joint work with Fabrizio Colombo. The work on the $S$-functional calculus is joint work with Fabrizio Colombo, Jonathan Gantner and Irene Sabadini. The work on the related function theory is joint work with Fabrizio Colombo, Irene Sabadini and Stefano Pinton.
MOX Colloquia. Patient-specific hemodynamics simulations for interventional planning of congenital and acquired cardiac diseases. Irene Vignon-Clementel, Inria - France. Thursday 21 January 2021, 2:00 p.m. Online Seminar: mox.polimi.it/elenco-seminari/?id_evento=2011&t=763724
Irene Vignon-Clementel is directrice de recherche (prof. equiv.) at Inria, the French National Institute for Research in Digital Science and Technology. She holds a 'habilitation' degree in Applied Mathematics (Sorbonne U., formerly U. Pierre & Marie Curie) and a PhD in Mechanical Engineering (Stanford U.). Her research focuses on modeling and numerical simulations of physiological flows to better understand a number of pathophysiologies and their treatment (surgical planning, medical device design), especially related to blood circulation and breathing. This requires developing models of different complexities, coupling them, making their numerical implementation robust, and basing their parameters on medical or experimental data specific to a subject. Applications include congenital and acquired cardiovascular diseases, respiratory diseases and liver pathophysiology, and more recently the interpretation of non-invasive dynamic imaging. Irene VC is a member of several conference committees, of the Int. J. Num. Methods Biomed. Eng. editorial board, of the VPHi board, and of the scientific advisory committee for the 3DS-FDA ENRICHMENT Project, and was co-chair of the international conference VPH2020. She received the top recipient award of the western states American Heart Association fellowship (2004-2006), the student award at the World Congress of Computational Mechanics by the USACM and the USACM Executive Committee (2006), and Inria excellence awards (2012 and 2016), and has been awarded an ERC consolidator grant (2019). She has been working with companies and clinicians as a PI in a number of national and international grants, such as a Leducq transatlantic network of excellence, and has been actively promoting the computational bioengineering and medicine interface through co-supervision of MD-PhDs, joint research projects, conference organization and interface articles with clinicians.
Hemodynamics modeling has become mature enough to simulate local fluid dynamics changes due to a surgery or device implantation. However, taking into account their interactions with the rest of the circulation, or even the downstream vascular bed not accessible by imaging, remains challenging. We will present multi-fidelity models and computational methods that have been developed to tackle this issue for patient-specific image-based modeling. We will demonstrate through examples of congenital heart disease and coronary vascular disease how such simulations can be performed by including morphological and functional data. Finally, we will discuss the advantage of combining these simulations with supervised machine learning as a tool to predict abdominal aneurysm growth risk and to palliate the lack of mechanistic growth equations. Contact: [email protected]

Symmetric solutions to supercritical elliptic problems. Benedetta Noris, Politecnico di Milano. Tuesday 26 January 2021, 11:15 a.m. When searching for solutions to Sobolev-supercritical elliptic problems, a major difficulty is the lack of Sobolev embeddings, which entails a lack of compactness. In this talk, I will discuss how symmetry and monotonicity properties can help to overcome this obstacle.
In particular, I will present a recent result concerning the existence of axially symmetric solutions to a semilinear equation, in collaboration with A. Boscaggin, F. Colasuonno and T. Weth.

From population dynamics to epidemiology: some simple differential models. Gianmaria Verzini, Politecnico di Milano. The aim of this seminar is to introduce some simple mathematical models in population dynamics. After considering the first models for one or more populations, due to Verhulst, Lotka and Volterra, we will focus in particular on one of the basic models for the study of the spread of epidemics: the SIR model of Kermack and McKendrick.

MOX Seminar. Nowcasting the Italian epidemic outbreak of SARS-CoV-2. Alessio Farcomeni, Università Roma Tor Vergata. Thursday 28 January 2021, 2:00 p.m. sharp. In the talk, we briefly discuss the main epidemiological features of SARS-CoV-2 one year into the pandemic, giving also a short account of the public data available for Italy and of the main limits of lay analyses. We then discuss an accurate method for short-term forecasting of ICU occupancy at the local level. Our approach is based on an optimal ensemble of two simple methods: a generalized linear mixed regression model which pools information over different areas, and an area-specific non-stationary integer autoregressive methodology. Optimal weights are estimated using a leave-last-out rationale. Daily predictions between February 24th and November 27th, 2020 have a median error of 3 beds (third quartile: 8) at the regional level, with coverage of 99% prediction intervals that exceeds the nominal one. Finally, we present a different method based on a modified non-linear GLM for each indicator, including the potential effect of exogenous variables, based on appropriate distributional assumptions and a logistic-type growth curve. This allows us to accurately predict important characteristics of the epidemic (e.g., peak time and height). Based on joint works with Pierfrancesco Alaimo di Loro, Fabio Divino, Giovanna Jona Lasinio, Gianfranco Lovison, Antonello Maruotti, Marco Mingione.

Mathematics vs Dementia. Alain Goriely, University of Oxford. Monday 1 February 2021, 11:45 a.m. Neurodegenerative diseases such as Alzheimer's or Parkinson's are devastating conditions with poorly understood mechanisms and no cure. Yet, a striking feature of these conditions is the characteristic pattern of invasion throughout the brain, leading to well-codified disease stages associated with various cognitive deficits and pathologies. How can we use mathematics to gain insight into this process and, in doing so, gain understanding about how the brain works? In this talk, I will show that by linking new mathematical theories to recent progress in imaging, we can unravel some of the universal features associated with dementia and, more generally, brain functions.

JMGT [Jordan-Moore-Gibson-Thompson] dynamics arising in nonlinear acoustics - a view from the boundary. Irena Lasiecka, University of Memphis. A third-order (in time) JMGT equation is a nonlinear (quasi-linear) Partial Differential Equation (PDE) model introduced to describe the nonlinear propagation of high-frequency acoustic waves. The interest in studying this type of problem is motivated by a large array of applications arising in engineering and medical sciences, including high-intensity focused ultrasound [HIFU] technologies, lithotripsy, welding, and others.
The important feature is that the model avoids the infinite speed of propagation paradox associated with the classical second-order (in time) equation referred to as the Westervelt equation. Replacing classical heat transfer by heat waves gives rise to the third-order time derivative scaled by a small parameter $\tau > 0$; the latter represents the thermal relaxation time and is intrinsic to the properties of the medium where the dynamics occurs. The aim of the present lecture is to provide a brief overview of recent results in the area which are pertinent to both linear and nonlinear dynamics. From the mathematical point of view, JMGT can be seen as a nonlinear perturbation of a third-order strictly hyperbolic system which, however, has a characteristic boundary. This feature has, of course, strong implications for the boundary behavior [both regularity and controllability], which cannot be patterned after classical hyperbolic systems theory [as is the case for the wave equation]. As a consequence, the analysis of regularity [both forward and inverse estimates] is particularly challenging, even in the linear case. Several recent results pertaining to boundary stabilization, optimal control, and asymptotic analysis of the solutions with vanishing relaxation time will be presented and discussed. In all these cases, peculiar features associated with the third-order dynamics lead to novel phenomenological behaviors.

Counting minimal surfaces in negatively curved manifolds. André Neves, University of Chicago. After presenting some of the recent progress on existence of minimal surfaces, I will talk about my recent work with Calegari and Marques where we introduce a quantity that counts some minimal surfaces in negatively curved manifolds and which is minimized by the hyperbolic metric.

When can solutions of polynomial equations be algebraically parametrized? Olivier Debarre, Sorbonne Université - Université de Paris. The description of all the solutions of the equation $x^2+y^2=z^2$ in integral numbers (a.k.a. Pythagorean triples) is a very ancient problem: a Babylonian clay tablet from about 1800 BC may contain some solutions, Pythagoras (about 500 BC) seems to have known one infinite family of solutions, and so did Plato... This gives a first example of a rational variety: the rational points on the circle with equation $x^2+y^2=1$ can be algebraically parametrized by one rational parameter. More generally, one says that a variety, defined by a system of polynomial equations, is rational if its points (the solutions of the system) can be algebraically parametrized, in a one-to-one fashion, by independent parameters. I will begin with easy standard examples, then explain and apply some (not-so-recent) techniques that can be used to prove that some varieties (such as the set of rational solutions of the equation $x^3+y^3+z^3+t^3=1$) are not rational.

Minimal time optimal control for the moon lander problem. Elsa Marchini, Politecnico di Milano. Tuesday 2 February 2021, 11:15 a.m. We study a variant of the classical safe-landing optimal control problem in aerospace, introduced by Miele in the Sixties, where the target was to land a spacecraft on the moon while minimizing the consumption of fuel. Assuming that the spacecraft has a failure and that the thrust (representing the control) can act in both vertical directions, the new target becomes to land safely in minimal time, no matter what the fuel consumption is. Depending on the initial data (height, velocity, and fuel), we prove that the optimal control can be of four different kinds, all piecewise constant. Our analysis covers all possible situations, including the nonexistence of a safe landing strategy due to lack of fuel, or for heights/velocities for which even total braking is insufficient to stop the spacecraft.
In dependence of the initial data (height, velocity, and fuel), we prove that the optimal control can be of four different kinds, all being piecewise constant. Our analysis covers all possible situations, including the nonexistence of a safe landing strategy due to the lack of fuel or for heights/velocities for which also a total braking is insufficient to stop the spacecraft. This talk is based on a joint work with Filippo Gazzola New results on the Lieb-Thirring inequality Mathieu Lewin, CEREMADE, Université Paris Dauphine, Paris The Lieb-Thirring inequality is a generalization of the Gagliardo-Nirenberg inequality, which plays a central role in the analysis of large quantum systems. In this talk I will first introduce the inequality and then focus the discussion on its best constant. After reviewing what is believed, what is known and what is open, I will present new results on the value of the best constant as well as numerical simulations in 1D and 2D. Collaboration with Rupert L. Frank (Caltech, USA) and David Gontier (Paris-Dauphine, France). Digital Storytelling e narrazione matematica: costruire competenze matematiche online Giovannina Albano, Università degli Studi di Salerno mercoledì 10 febbraio 2021 alle ore 15:00 Il seminario presenta la progettazione e realizzazione di attività di insegnamento/apprendimento della matematica in ambienti digitali online. La metodologia soggiacente, sviluppata nell'ambito di un Progetto di Ricerca di Interesse Nazionale, sfrutta una duplice metafora di narrazione e un'opportuna organizzazione didattico-tecnologica, come elementi abilitanti lo sviluppo di competenze matematiche. Il progetto, nato per integrare e sfruttare le potenzialità dell'ambiente digitale nel contesto della didattica in presenza, assume particolare significatività nel nuovo contesto di didattica a distanza in cui la scuola si trova, a causa dell'attuale pandemia. Some global results for homogeneous Hormander sums of squares Stefano Biagi, Politecnico di Milano In this talk we present several global results concerning the class of the homogeneous Hörmander sums of squares. As the name suggests, the operators falling in this class are sums of squares of smooth vector fields which are homogeneous of degree 1 with respect to a family of non-isotropic diagonal maps (usually called dilations); moreover, these operators intervene in several contexts of interest (Lie group Theory, sub-Riemannian manifolds, Mathematical Finance, etc.). After a brief introduction on general sub-elliptic operators (of which any homogeneous sum of squares is a particular case), we properly introduce the class of the homogeneous Hörmander sums of squares and we discuss some global qualitative aspects regarding these operators: global lifting on Carnot groups; existence/global estimates for the associated fundamental solution and heat kernel; maximum principles on unbounded domains. The results presented in this talk are contained in several papers in collaboration with A. Bonfiglioli, M. Bramanti and E. Lanconelli. Symmetry and rigidity for composite membranes and plates Eugenio Vecchi, Politecnico di Milano martedì 16 febbraio 2021 alle ore 11:15 The composite membrane problem is an eigenvalue optimization problem that can be formulated as follows: Build a body of prescribed shape out of given materials (of varying densities) in such a way that the body has a prescribed mass and so that the basic frequency of the resulting membrane (with fixed boundary) is as small as possible. 
In the first part of the talk we will review the known results and present a Faber-Krahn-type result obtained in collaboration with G. Cupini (Università di Bologna). A natural extension of the above problem to the case of plates is the composite plate problem, an eigenvalue optimization problem involving the bilaplacian operator. The Euler-Lagrange equation associated to it is a fourth-order PDE coupled with Navier boundary conditions (for the hinged plate). In the second part of the talk we will focus on symmetry properties of optimal pairs. These results have been obtained in collaboration with F. Colasuonno (Università di Bologna).

The Dunkl intertwining operator
Hendrik De Bie, Ghent University
There are two crucial operators in the theory of Dunkl harmonic analysis. The first is the Dunkl transform, which generalizes the Fourier transform. The second is the intertwining operator, which maps ordinary partial derivatives to Dunkl operators. Although some abstract statements are known about the intertwining operator, its explicit formula is generally not known for most classes of reflection groups. In recent work Yuan Xu proposed a formula in the case of dihedral groups and a restricted class of functions. We extend his formula to all functions and give a general strategy for obtaining similar formulas for other reflection groups. This is based on joint work with Pan Lian, available under arXiv:2002.09065 and to appear in J. Funct. Anal.

Modeling Dementia
Ellen Kuhl, Stanford University
Thursday, 18 February 2021, 14:00 sharp
Ellen Kuhl is the Robert Bosch Chair of Mechanical Engineering at Stanford University. She is a Professor of Mechanical Engineering and, by courtesy, Bioengineering. She received her PhD from the University of Stuttgart in 2000 and her Habilitation from the University of Kaiserslautern in 2004. Her area of expertise is Living Matter Physics, the design of theoretical and computational models to simulate and predict the behavior of living structures. Ellen has published more than 200 peer-reviewed journal articles and edited two books; she is an active reviewer for more than 20 journals at the interface of engineering and medicine and an editorial board member of seven international journals in her field. Ellen is the current Chair of the US National Committee on Biomechanics and a Member-Elect of the World Council of Biomechanics. She is a Fellow of the American Society of Mechanical Engineers and of the American Institute for Medical and Biological Engineering. She received the National Science Foundation Career Award in 2010, was selected as Midwest Mechanics Seminar Speaker in 2014, and received the Humboldt Research Award in 2016. Ellen is an All American triathlete on the Wattie Ink. Elite Team, a multiple Boston, Chicago, and New York marathon runner, and a Kona Ironman World Championship finisher.
Neurodegeneration will undoubtedly become a major challenge in medicine and public health, driven by demographic changes worldwide. More than 45 million people are living with dementia today, and this number is expected to triple by 2050. Recent studies have reinforced the hypothesis that the prion paradigm, the templated growth and spreading of misfolded proteins, could help explain the progression of a variety of neurodegenerative disorders. However, our current understanding of prion-like growth and spreading is rather empirical.
Here we show that a physics-based reaction-diffusion model can explain the growth and spreading of misfolded protein in a variety of neurodegenerative disorders. We combine the classical Fisher-Kolmogorov equation for population dynamics with anisotropic diffusion and simulate misfolding across representative sections of the human brain and across the brain as a whole. Our model correctly predicts amyloid-beta deposits and tau inclusions in Alzheimer's disease, alpha-synuclein inclusions in Parkinson's disease, and TDP-43 inclusions in amyotrophic lateral sclerosis. To reduce the computational complexity, we represent the brain through a connectivity-weighted Laplacian graph created from 418 brains of the Human Connectome Project. Our brain network model correctly predicts the key characteristic features of whole-brain models at a fraction of their computational cost. Our results suggest that misfolded proteins in various neurodegenerative disorders grow and spread according to a universal law that follows the basic physical principles of nonlinear reaction and anisotropic diffusion. Our simulations can have important clinical implications, ranging from estimating the socioeconomic burden of neurodegeneration to designing clinical trials and pharmacological interventions.
Contact: [email protected]

Past seminars

Black holes, gravity and thermodynamics
Francesco Belgiorno, Politecnico di Milano
We will address some aspects of the physics of extreme gravitational fields and black holes, also in connection with the recent discoveries and with the work of Roger Penrose. We will then touch on the thermodynamics of black holes.

Some recent stability results for beams with intermediate piers
Maurizio Garrione, Politecnico di Milano
Zoom link: polimi-it.zoom.us/j/84708758746?pwd=aFpGY09hSzZnNDg3alk4bnhTNDN6UT09
We deal with nonlinear fourth-order evolution equations describing the dynamics of beams with one or more intermediate piers. We study the role of the geometry of the structure (that is, of the position of the piers), as well as the effect of a nonhomogeneous density, in the (linear) stability of bi-modal solutions. The analysis gives some evidence that both the asymmetry and the nonhomogeneity reinforce the structure.

Cryptocurrencies: the future of money?
Vincenzo Vespri, Università di Firenze
Thursday, 17 December 2020, 14:00
In this talk I will speak about bitcoins and cryptocurrencies, how they work and their perspectives. I will consider possible applications in business and the IoT.

Science, power and citizenship
Giovanni Boniolo, Università di Ferrara
Wednesday, 16 December 2020, 15:00
Who decides, and who legitimizes, public health campaigns (vaccinations, quarantines, genetic screenings)?
Starting from real cases, both Italian and international, we will discuss how scientific knowledge, the state's duty to protect its citizens, and genuine public participation can interact fruitfully, without any slide into the suppression of liberties or into arrogance.

Finite jet determination for CR mappings
Alexander Tumanov, University of Illinois at Urbana-Champaign
Dipartimento di Matematica, Politecnico di Milano (online)
A CR mapping is a diffeomorphism between two real manifolds in complex space that satisfies the tangential Cauchy-Riemann equations. We are concerned with the problem of whether a CR mapping is uniquely determined by its finite jet at a point. This problem has been popular since the 1970s and the number of publications on the matter is enormous. Nevertheless, natural fundamental questions have remained open. I will present a solution to a version of the problem and discuss old and recent results.

Non-renormalization of the 'chiral anomaly' in interacting lattice Weyl semimetals
Alessandro Giuliani, Roma Tre University & Centro Linceo Interdisciplinare B. Segre
Monday, 14 December 2020, 15:00
Weyl semimetals are 3D condensed matter systems characterized by a degenerate Fermi surface consisting of a pair of 'Weyl nodes'. Correspondingly, in the infrared limit, these systems behave effectively as Weyl fermions in 3+1 dimensions. We consider a class of interacting 3D lattice models for Weyl semimetals and prove that the quadratic response of the quasi-particle flow between the Weyl nodes, which is the condensed matter analogue of the chiral anomaly in QED4, is universal, that is, independent of the interaction strength and form. Universality, which is the counterpart of the Adler-Bardeen non-renormalization property of the chiral anomaly for the infrared emergent description, is proved to hold at a non-perturbative level, notwithstanding the presence of a lattice (in contrast with the original Adler-Bardeen theorem, which is perturbative and requires relativistic invariance to hold). The proof relies on constructive bounds for the Euclidean ground state correlation functions combined with lattice Ward identities, and it is valid arbitrarily close to the critical point where the Weyl points merge and the relativistic description breaks down. Joint work with V. Mastropietro and M. Porta.

Turnpike control and deep learning
Enrique Zuazua, University of Erlangen-Nuremberg
Thursday, 10 December 2020, 14:00 sharp
Enrique Zuazua Iriondo (Eibar, Basque Country, Spain, 1961), dual PhD in Mathematics from the University of the Basque Country and the Université Pierre et Marie Curie, holds a Chair in Applied Analysis (Alexander von Humboldt Professorship) at FAU, the Friedrich-Alexander University Erlangen-Nürnberg (Germany). He leads the research project "DyCon: Dynamic Control", funded by the ERC (European Research Council), at the Deusto Foundation, University of Deusto, Bilbao (Basque Country, Spain) and at the Department of Mathematics of UAM, the Autonomous University of Madrid, where he holds secondary appointments as Professor of Applied Mathematics (UAM) and Director of the CCM - Chair of Computational Mathematics (Deusto). His fields of expertise in Applied Mathematics cover topics related to partial differential equations, systems control and machine learning, and have led to fruitful collaborations with different industrial sectors, such as optimal shape design in aeronautics and the management of electrical and water distribution networks.
His work has had a high impact (h-index = 41), and he has mentored a significant number of postdoctoral researchers and coached a wide network of science managers. He holds a degree in Mathematics from the University of the Basque Country, and a dual PhD degree from the same university (1987) and the Université Pierre et Marie Curie, Paris (1988). In 1990 he became Professor of Applied Mathematics at the Complutense University of Madrid, later moving to UAM in 2001. He has been awarded the Euskadi (Basque Country) Prize for Science and Technology 2006, the Spanish National Julio Rey Pastor Prize 2007 in Mathematics and Information and Communication Technology, and two Advanced Grants of the European Research Council (ERC): NUMERIWAVES in 2010 and DyCon in 2016. He is an Honorary Member of Academia Europaea and of Jakiunde, the Basque Academy of Sciences, Letters and Humanities, Doctor Honoris Causa of the Université de Lorraine in France, and Ambassador of the Friedrich-Alexander University Erlangen-Nürnberg, Germany. He was an invited speaker at ICM 2006 in the section on Control and Optimization. From 1999 to 2002 he was the first Scientific Manager of the Panel for Mathematics within the Spanish National Research Plan, and from 2008 to 2012 he was the Founding Scientific Director of BCAM, the Basque Center for Applied Mathematics. He is also a member of the Scientific Council of a number of international research institutions, such as CERFACS in Toulouse, France, and a member of the Editorial Board of some of the leading journals in Applied Mathematics and Control Theory.
The turnpike principle, ubiquitous in applications, asserts that over long time horizons optimal control strategies are nearly of a steady-state nature. In this lecture we shall survey some recent results on this topic and present some of its consequences for deep supervised learning and, in particular, for Residual Neural Networks. This lecture is based in particular on recent joint work with C. Esteve, B. Geshkovski and D. Pighin.
[1] Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany [2] Fundación Deusto, Bilbao, Basque Country, Spain [3] Universidad Autónoma de Madrid, Spain

Deep Learning modeling of Limit Order Book: a comparative perspective
J. D. Turiel, UCL-ICL, Barclays Investment Bank
Wednesday, 9 December 2020, 17:30 sharp
polimi-it.zoom.us/j/82640843401
We address theoretical and practical questions in the domain of Deep Learning for High Frequency Trading. State-of-the-art models such as random models, logistic regressions, LSTMs, LSTMs equipped with an attention mask, CNN-LSTMs and MLPs are reviewed and compared on the same tasks, feature space and dataset, and clustered according to pairwise similarity and performance metrics. The underlying dimensions of the modelling techniques are then investigated to understand whether these are intrinsic to the Limit Order Book's dynamics. We observe that the Multilayer Perceptron performs comparably to or better than state-of-the-art CNN-LSTM architectures, indicating that dynamic spatial and temporal dimensions are a good approximation of the LOB's dynamics, but not necessarily the true underlying dimensions.
A population-based analysis of increasing rates of suicide mortality in Japan and South Korea, 1985–2010
Sun Y. Jeon, Eric N. Reither & Ryan K. Masters

In the past two decades, rates of suicide mortality have declined among most OECD member states. Two notable exceptions are Japan and South Korea, where suicide mortality has increased by 20 % and 280 %, respectively. Population and suicide mortality data were collected through national statistics organizations in Japan and South Korea for the period 1985 to 2010. Age, period of observation, and birth cohort membership were divided into five-year increments. We fitted a series of intrinsic estimator age-period-cohort models to estimate the effects of age-related processes, secular changes, and birth cohort dynamics on the rising rates of suicide mortality in the two neighboring countries. In Japan, elevated suicide rates are primarily driven by period effects, initiated during the Asian financial crisis of the late 1990s. In South Korea, multiple factors appear to be responsible for the stark increase in suicide mortality, including recent secular changes, elevated suicide risks at older ages in the context of an aging society, and strong cohort effects for those born between the Great Depression and the aftermath of the Korean War. In spite of cultural, demographic and geographic similarities in Japan and South Korea, the underlying causes of increased suicide mortality differ across these societies—suggesting that public health responses should be tailored to fit each country's unique situation.

In recent decades, rates of suicide mortality have steadily declined among most member states of the Organization for Economic Cooperation and Development [1]. Unfortunately, Japan and South Korea are notable exceptions to this overall trend. Rates of suicide mortality in Japan spiked in 1998 and remained high thereafter (Fig. 1). In South Korea, the rate of suicide mortality has climbed since 1985, reaching over 30 per 100,000 person-years lived (PYL) in 2010. As a result of these trends, South Korea and Japan now exhibit the two highest rates of suicide mortality among all OECD countries.

Fig. 1: Age-standardized suicide mortality rates in Japan, South Korea and all OECD nations (averaged), 1990–2012 (Source: OECD Factbook. Rates are age-standardized to the 2012 OECD population to account for differences in age structures across countries and over time [1].)

Prominent social scientists have argued that elevated suicide rates stem from disturbances to the social equilibrium, such as economic recession or rapid industrial expansion [2–5]. A small group of studies has evaluated the impact of specific social changes on increasing suicide rates in Japan and South Korea. These studies have detected associations between suicide and divorce rates [6], changing patterns of marriage [7] and the Asian economic crisis that impacted both Japan and South Korea in the late 1990s [8]. Another body of scholarship has attempted to isolate the effects of sweeping social changes (i.e., secular trends or period effects) on suicide mortality from factors related to population aging and birth cohort membership. Understanding the relative impact of age, period, and cohort effects is important because it can illuminate the mechanisms responsible for changes in a population's suicide rates. For instance, Odagiri et al.
[9] suggest that strong age effects among middle-aged men in Japan are primarily responsible for increasing suicide rates—not secular changes such as the economic recession. In a similar analysis, Lee and Kim [10] argue that birth cohort membership has made the largest contribution to increasing suicide rates in South Korea. Results from these studies suggest that the underlying causes of increasing suicide mortality may differ in Japan and South Korea, despite geographic, cultural and demographic similarities shared by the two countries.

To better understand increasing rates of suicide mortality in Japan and South Korea, this study extends previous research in three ways. First, whereas some prior research has focused on the effect of one temporal factor such as age [6] or secular changes [8] on suicide in these countries, we analyzed the distinct effects of all three time-related demographic factors (age, period, and birth cohort) on suicide in Japan and South Korea by applying the intrinsic estimator (IE)—an innovative method with desirable statistical properties for age-period-cohort (APC) modeling [11, 12]—to suicide mortality rates. Second, we extend the period of analysis to 2010; this is important because previous research on suicide in either Japan [9] or South Korea [10] using APC modeling is generally limited to the early 2000s, prior to South Korea's extraordinarily rapid increase in suicide mortality that led to its sharp divergence from other OECD nations, including Japan. Third, by comparing and contrasting the demographic factors that have shaped suicide mortality patterns over the last quarter century in Japan and South Korea, our study reveals commonalities and differences in these neighboring Asian countries, with attendant public health implications for each nation.

Data and measures
Analyzing rates of suicide mortality in South Korea and Japan between 1985 and 2010 required age- and sex-specific data on the number of suicide deaths (numerators) and population counts (denominators) over this period of observation. Cause-specific mortality data for each country's population (including the number of suicide deaths) was provided by Statistics Korea [13] and the Vital Statistics Survey [14], respectively. Population estimates were provided by national statistics organizations—Statistics Korea and, in Japan, the Ministry of Internal Affairs and Communications. After securing these data, we divided age from 10 to 80 into fourteen 5-year age groups. For example, the age group 10–14 refers to people aged 10.0 to \(14.\overline{9}\) (that is, everyone who has reached age 10 but not yet age 15). The youngest age groups (0–4 and 5–9) were excluded because suicide at these ages is extremely rare. We included a total of six periods of observation in our study, which were evenly spaced in 5-year intervals over the period 1985 to 2010. By subtracting age from the period of observation, we derived individuals' birth cohort membership, which we also grouped into 5-year intervals. We excluded people over the age of 80 from our analysis because both census data and cause-specific mortality data in Japan and South Korea provide an open-ended interval for the eldest age group, making it impossible to derive 5-year birth cohorts. Consequently, we evaluated suicide mortality data for nineteen separate cohorts born between 1905 and 2000 across the time period 1985 to 2010.
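A brief sketch of this data arrangement is given below. It is purely illustrative: the death counts and population totals are random placeholders (the real inputs are the vital statistics and census sources cited above), but the 5-year grouping and the cohort index it derives follow the scheme just described.

```python
# Illustrative arrangement of the 14 x 6 age-by-period table described above.
# NOTE: deaths and population below are random placeholders, not real data.
import numpy as np

age_groups = [(a, a + 4) for a in range(10, 80, 5)]     # 10-14, 15-19, ..., 75-79
periods = list(range(1985, 2011, 5))                    # 1985, 1990, ..., 2010
n_age, n_per = len(age_groups), len(periods)            # a = 14, p = 6
n_cohort = n_age + n_per - 1                            # a + p - 1 = 19 birth cohorts

rng = np.random.default_rng(0)
deaths = rng.poisson(60, size=(n_age, n_per))           # placeholder suicide counts
population = rng.integers(10**6, 5 * 10**6, size=(n_age, n_per))

rates = deaths / population * 100_000                   # deaths per 100,000 PYL

# Cohort membership of each cell: subtracting age from period, in 5-year groups.
cohort_starts = list(range(1905, 2000, 5))              # 1905-09, ..., 1995-99
cohort_index = np.empty((n_age, n_per), dtype=int)
for i in range(n_age):
    for j in range(n_per):
        cohort_index[i, j] = (n_age - 1 - i) + j        # 0 -> 1905-09, 18 -> 1995-99

assert cohort_index.max() + 1 == n_cohort == len(cohort_starts)
```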
Classic age-period-cohort models
Age-Period-Cohort (APC) models have been developed to estimate effects for all three temporal factors in studies of cause-specific mortality rates. In the generalized linear APC model, suicide mortality can be written in log-linear regression form [15]:
$$ \log\left(E_{ijk}\right) = \log\left(P_{ijk}\right) + \mu + \alpha_i + \beta_j + \gamma_k \qquad (1) $$
where \(E_{ijk}\) denotes the expected number of suicides, assumed to follow a Poisson distribution, for the ith age group at the jth period of observation and the kth birth cohort, with i = 1,…,a, j = 1,…,p and k = 1,…,(a + p − 1); \(P_{ijk}\) denotes the size of the population for each age-period-cohort combination, which enters as the offset term in the log-linear regression; μ is the intercept; \(\alpha_i\) is the age effect in the ith age group; \(\beta_j\) is the period effect in the jth observation period; and \(\gamma_k\) is the cohort effect for the kth birth cohort, where k = a − i + j [12, 16, 17].
Equation (1) is under-identified in traditional ordinary least squares (OLS) regression analysis of tabular data because the three temporal factors are linearly related to each other (age = period − cohort), leaving the design matrix X one short of full rank (Footnote 1). Consequently, a unique solution does not exist and the model has an infinite set of A, P, and C estimates. This under-identification issue is a limitation inherent to classic APC accounting models [18]. Traditional constrained generalized linear models (CGLIM) address APC model under-identification by imposing an equality constraint, usually on two adjacent age, period or cohort estimates [19]. A limitation of CGLIM analysis is that model estimates are often sensitive to the choice of equality constraint [16, 20], leading researchers to rely on strong a priori assumptions based on theoretical considerations or other external information [18]. However, unless these assumptions are completely sound, there is substantial risk of model misspecification [16, 19, 21, 22]. With these challenges in mind, we adopted the intrinsic estimator (IE) method for APC modeling [12], which identifies a constraint that does not depend on a priori theoretical assumptions about age, period, or cohort effects, but rather on the number of age (a) and period (p) categories (Footnote 2). We fit IE models to sex- and country-specific suicide mortality data using the apc_ie module in Stata 13 [23] (Footnote 3); a schematic illustration of the estimation idea is sketched below.

Fig. 2: Suicide mortality rates by age, period and birth cohort in Japan and South Korea, 1985–2010

Figure 2 displays suicide mortality rates per 100,000 person-years lived (PYL) for different age groups, periods of observation, and birth cohorts. We calculated these rates by dividing the number of suicide deaths by the population for each age group, period of observation, and birth cohort, and then multiplied by a constant of 100,000 to ease interpretation. In both countries, suicide rates are very low at young ages, but by age 30–34 increase to 18.7 per 100,000 PYL in Japan and 17.0 per 100,000 PYL in South Korea. Suicide rates continue to increase through midlife in both countries, but the pace of increase is especially rapid in Japan, where rates peak at age 55–59. Conversely, in South Korea suicide rates increase steadily with age across the entire life course. When disaggregated by gender, it becomes clear that age-specific rates of suicide are far higher among men than women in both countries (Fig. 3).
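As an aside on the estimation strategy described in the modeling subsection above: the intrinsic estimator can be characterized as the minimum-norm least-squares solution of the rank-deficient APC regression [12]. The sketch below is a simplified illustration of that idea (it is not the authors' apc_ie Stata code), and it fits log rates by ordinary least squares via the Moore-Penrose pseudoinverse rather than maximizing the Poisson likelihood used in the paper.

```python
# Hedged sketch of the intrinsic-estimator idea: with effect-coded age, period
# and cohort dummies, the APC design matrix is rank-deficient by exactly one,
# and the IE corresponds to the minimum-norm (pseudoinverse) solution.
# This is NOT the authors' apc_ie code; it uses OLS on log rates, not Poisson.
import numpy as np

def effect_code(level, n_levels):
    """Sum-to-zero coding with the last level as the reference category."""
    x = np.zeros(n_levels - 1)
    if level == n_levels - 1:
        x[:] = -1.0
    else:
        x[level] = 1.0
    return x

def apc_design(n_age, n_per):
    n_coh = n_age + n_per - 1
    rows = []
    for i in range(n_age):
        for j in range(n_per):
            k = (n_age - 1 - i) + j                     # cohort index of cell (i, j)
            rows.append(np.concatenate((
                [1.0],                                  # intercept
                effect_code(i, n_age),                  # age effects
                effect_code(j, n_per),                  # period effects
                effect_code(k, n_coh),                  # cohort effects
            )))
    return np.asarray(rows)

X = apc_design(n_age=14, n_per=6)
print(X.shape, np.linalg.matrix_rank(X))                # (84, 37), rank 36

log_rate = np.random.default_rng(1).normal(size=X.shape[0])   # toy outcome only
b_ie = np.linalg.pinv(X) @ log_rate                     # minimum-norm APC estimates
```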
In addition, among women in both countries, suicide rates are low and fairly stable from the 20s through midlife, but subsequently increase during later phases of the life course. This indicates that the midlife peak in suicide mortality in Japan's general population is driven by distinct suicide patterns among Japanese men.

Fig. 3: Sex-stratified suicide mortality rates by age, period and birth cohort in Japan and South Korea, 1985–2010

In the initial period of observation (1985), suicide mortality rates were about two times higher in Japan than South Korea (Fig. 2). Suicide then declined in 1990 in both countries, before rising again later that decade. In Japan, suicide rates spiked between 1995 and 2000, then stabilized at about 26 per 100,000 PYL (26.1 in 2000, 26.2 in 2005 and 25.5 in 2010). Conversely, suicide increased sharply in South Korea after 2000, reaching 35.9 per 100,000 PYL in 2010. Suicide mortality was higher among men than women in each period (Fig. 3). In Japan, suicide rates were fairly stable among women during the entire period of observation, revealing that the spike in suicide in the late 1990s was caused almost entirely by men. Despite very different levels of suicide mortality, men and women in South Korea exhibited similar trends between 1985 and 2010.

Cohorts born between 1925–29 and 1995–99 tended to show remarkably similar levels of suicide mortality in Japan and South Korea (Fig. 2). However, members of older Japanese birth cohorts exhibited much higher suicide rates than their South Korean counterparts. In the oldest birth cohort in our study (1905–09), suicide rates were well over 50 per 100,000 PYL in Japan, but only about 15 per 100,000 PYL in South Korea. Men and women in South Korea showed generally similar patterns of suicide mortality across birth cohorts (Fig. 3). In Japan, cohort-based trends differ by sex; whereas women showed a steady decline in suicide from the oldest to the youngest birth cohorts, men exhibited a decline for cohorts born between 1905 and the mid-1920s, followed by an increase for cohorts born through the mid-1940s, and then another decline. Trends among birth cohorts reflect those observed among different age groups, demonstrating the need to disentangle age, period and cohort effects through multivariate modeling, which we turn to next.

APC estimates from IE models
In Fig. 4, we present results from IE models for Japan and South Korea. These results indicate that the age groups most vulnerable to suicide differ across countries. In Japan, the effects of age were strongest in the middle-aged population, particularly among males in their fifties (β_male = 0.69, std. error = 0.036). For both males and females in Japan, suicide mortality increased steadily until age 50–54 before declining among males and leveling off among females. Conversely, in South Korea age had little effect on suicide mortality between ages 20 and 59 and the patterning of age coefficients was flat for both men and women. After age 60, the importance of age increased steadily for both men and women, peaking at the oldest age group (75–79) included in our analysis (β_total = 0.82, std. error = 0.075).

Fig. 4: Estimates from intrinsic estimator models of age, period, and cohort effects on suicide mortality in Japan and South Korea, 1985–2010 (shaded areas are 95 % CIs)

After a brief period of decline in suicide mortality between 1985 and 1990, period effects increased over the next decade in both countries (Fig. 4).
In Japan, this increase was limited to the 1990s; period effects stabilized at an elevated level after 2000. The estimated period effects were similar among Japanese men and women, though the range in variation was much smaller among women. Unlike Japan, period effects in South Korea continued to increase in the new millennium, with rapid growth between 2000 and 2005 and an even stronger effect in 2010 (β_total = 0.77, std. error = 0.029). Both the magnitude and patterning of period effects were similar for men and women in South Korea.

Estimated cohort effects on suicide in Japan and South Korea reveal contrasting birth cohort patterns in these countries' populations (Fig. 4). In Japan, the strongest effects were observed for the 1905–09 birth cohort (β_total = 1.00, std. error = 0.102), the eldest birth cohort in our analysis. These effects steadily weakened but remained significantly elevated for Japanese cohorts born between 1910 and 1924, before stabilizing near zero for several birth cohorts thereafter. Subsequently, cohorts born between 1945 and 1984 exhibited significantly lower rates of suicide mortality than other cohorts in Japan. This array of effects contrasts sharply with South Korea, where cohort effects were highest between the 1930s and the 1960s—corresponding roughly to the population born between the Great Depression and the aftermath of the Korean War. Also unlike the cohort patterns in Japan, the most recent birth cohorts in South Korea showed substantially lower rates of suicide mortality. Within each country, men and women exhibited similar patterns of cohort effects, though these effects were somewhat muted among women in South Korea.

Our study indicates that suicide mortality risks vary substantially across all three temporal dimensions (age, period, and birth cohort) in Japan and South Korea. According to Pampel [24], variation in suicide rates across demographic groups is an indicator of relative social and economic wellbeing, and related work has highlighted the continued importance of social integration during times of rapid change [5]. For instance, elevated odds of suicide mortality among middle-aged Japanese males suggest that this group might be subject to high levels of social stress, perhaps related to career demands at this stage of the life course. Similarly, high levels of suicide among Japanese cohorts born prior to 1920 imply that this group experienced significant social and/or economic strain, perhaps related to hardships induced by the Great Depression in early life, followed by World War II and reconstruction during this cohort's midlife.

While elevated odds of suicide among certain demographic groups raise important public health concerns, they do not necessarily indicate that these groups have contributed materially to changes in suicide mortality over our period of observation. For example, over the past 20 years the percentage of Japan's population between the ages of 40 and 65 has been remarkably stable (34.2 % in 1990; 34.4 % in 2000; and 34.0 % in 2010). Because this high-risk group has not increased in relative size between 1990 and 2010, the strong age effects associated with this group are not responsible for overall increases in Japan's suicide rate. Also, despite alarmingly high suicide rates among Japanese cohorts born prior to 1920, these cohorts did not contribute to increasing general suicide rates because they comprise a rapidly diminishing fraction of Japan's total population.
Conversely, while period effects in Japan were small in magnitude compared to the age and cohort effects, they nevertheless contributed to the general rise in suicide mortality because they were equally distributed across the entire Japanese population. Moreover, because period effects stabilized at a relatively high level between 2000 and 2010, overall rates of suicide mortality also stabilized at an elevated level and were not offset by the age or cohort effects that have impacted certain subgroups in the Japanese population. Some studies have argued that increasing rates of suicide mortality in Japan were triggered by the economic crisis of the late 1990s [8]. Our results are clearly consistent with this argument and contrary to some previous claims that strong age or cohort effects are responsible for recent increases in suicide mortality in Japan [9].

Whereas modest period effects are primarily responsible for the 20 % increase in rates of suicide mortality in Japan between 1985 and 2010, multiple factors are responsible for the much larger 280 % increase in suicide mortality in South Korea. Age effects were particularly strong in the over-60 age group, which has comprised a fast-growing share of the total population in recent decades—from 6.8 % in 1985 to 15.9 % in 2010. Also, cohort effects in South Korea were strongest among persons born between 1920 and 1960, and these birth cohorts also make up a sizable fraction of South Korea's total population. Finally—and perhaps most importantly—period effects on South Korea's suicide rates have increased sharply and steadily since 1990. This means that suicide risks for the entire population of South Korea have increased markedly in the past 20 years. Thus, unlike trends in Japan's suicide rates, which largely reflected period effects, the trends in South Korea's suicide rates reflect the combined influence of age, birth cohort, and shared period effects in the population. Previous research has emphasized high suicide rates among the elderly as well as cohort effects as the main reasons for the steep rise in suicide mortality rates in South Korea over the past 20 years [25]. In some regards our study supports those conclusions. However, our findings also point to recent secular changes—perhaps associated with rapid industrialization and population growth—in South Korea as a major contributor to increasing rates of suicide mortality.

Policy implications
High rates of suicide have led to the development of suicide prevention programs in both Japan and South Korea. For example, in 2001 the Ministry of Health, Labour and Welfare in Japan launched a comprehensive suicide prevention campaign [26]. In line with this project, the Japanese government enacted the Community-based Prefectural Emergency Fund for Suicide Prevention in June 2009. Meanwhile, in South Korea, a national campaign for suicide prevention was launched in 2001 to improve public awareness of suicide and identify individuals who exhibit suicidal ideation. Recently, the South Korean government also established the Korean Association for Suicide Prevention and took measures to enforce the anti-suicide law in 2012. Although we lack clear evidence on the efficacy of these suicide prevention programs, persistently high suicide mortality rates in Japan and South Korea indicate that more effective action is required. This is particularly true in South Korea, where suicide rates appear poised to increase even further in the future without urgent public health action.
One way to improve the effectiveness of public health efforts may be to direct limited public health resources towards the highest-risk populations identified in our study. For example, while our study supports the call for renewed policy attention toward middle-aged males in Japan [9], it also shows that suicide risks are significantly elevated for women in this age range. Similarly, interventions that are specifically tailored for elderly individuals in South Korea could help ameliorate the very high rates of suicide mortality observed in this subpopulation. In South Korea, it is paramount to identify the secular changes that have led to drastic increases in suicide mortality over the past two decades. Prior studies have investigated the influence of specific secular changes such as economic recession [8], unemployment [8], and divorce rates [6]. However, because any one of these factors is unlikely to fully explain the pattern of period effects in South Korea, it may prove fruitful to investigate the combined effects of—and potential interactions between—various secular changes. Also, in South Korea serious challenges are posed by high suicide rates among cohorts born during the tumultuous era spanning the Great Depression and the Korean War. As members of these cohorts age, they may be subject to a form of "double jeopardy" as cohort-based experiences and age-related processes combine to increase the risk of suicide. The efficacy of policies aimed at the elderly in South Korea may improve by accounting for the difficult experiences shared by members of these birth cohorts.

Strengths, limitations and future directions
To our knowledge, our study is the first to compare and contrast temporal dimensions of suicide in Japan and South Korea by applying the same statistical methods to data covering identical periods of observation. This approach led to the discovery that age, period, and cohort effects have made distinct contributions to changing rates of suicide mortality in these countries, despite their geographic proximity and certain cultural similarities. Differences in the demographic factors that have shaped suicide mortality in Japan and South Korea suggest that each country will benefit from locally applicable suicide prevention programs, and also from additional research that can shed further light on the heterogeneity of factors affecting suicide in each country. Importantly, our study also extends previous APC research by including nearly a decade of data previously unexamined in South Korea. Given rapid increases in suicide rates in South Korea in recent years, our findings provide a much-needed update on this issue and should provoke concern and further action among researchers and policymakers alike. Despite these strengths, our study is limited by its inability to pinpoint precisely which social and economic factors are responsible for the age, period, and cohort effects that we observe. Given the data available to address the specific research objectives in this study (i.e., vital statistics and census data), we did not have access to socioeconomic or other potentially useful information (e.g., marital status, urban/rural residence, or familial and social integration) that could further improve our understanding of suicide mortality in Japan and South Korea. Future studies should build upon the results of our investigation by attempting to identify these factors.
For example, contingent on the availability of individual-level suicide data (e.g., in a prospective cohort study where mortality is carefully monitored over a long period of observation), researchers could explore hierarchical APC models, which would facilitate the simultaneous examination of both macro-level factors (e.g., unemployment rates) that may underlie period or cohort effects and micro-level factors (e.g., depression) that are more proximate determinants of suicide [5]. Whichever research strategies are adopted, it is essential that future research continue to investigate the determinants of suicide in Japan and South Korea. This should be an urgent national priority in each country, as a deeper understanding of these issues will lead to more effective policies, ultimately reducing the burden of suicide and improving population health for all.

In spite of cultural, demographic and geographic similarities in Japan and South Korea, the underlying causes of increased suicide mortality rates in these two countries differ. In Japan, the increase in suicide mortality is primarily driven by modest period effects, initiated during the Asian financial crisis of the late 1990s. In South Korea, the temporal factors that underlie the stark increase in suicide mortality are more complex, including recent secular changes, pronounced age effects among elderly individuals, and strong cohort effects for those born between the Great Depression and the aftermath of the Korean War. Our findings suggest that public health responses should be tailored to fit each country's unique situation.

The institutional review board at Utah State University (USU) determined that this project does not qualify as human subjects research and is exempt from institutional oversight (USU Assurance: FWA#: 00003308, Protocol#: 4071).

Consent to publish
Data provided by Statistics Korea are publicly available at http://kosis.kr/statHtml/statHtml.do?orgId=101&tblId=DT_1B34E01&conn_path=I2&language=en. Data provided by the Vital, Health and Social Statistics Division of the Ministry of Health, Labour and Welfare in Japan are publicly available at http://www.stat.go.jp/english/data/chouki/02.htm.

Footnote 1: The second identification issue potentially applicable to tabular data arranged in an age-by-period matrix is that A, P and C are linearly related to the outcome variable, Y. However, because equation (1) specifies A, P and C as categorical rather than continuous variables, no assumptions are made regarding the functional form of associations between Y and these temporal dimensions [19]. In such a model, the second identification issue only holds when the categorical variables reveal exactly linear associations between A, P, C and Y. In practice, it is exceedingly rare to observe such perfectly arranged associations, and it is clearly not applicable in any of the APC models in our investigation.

Footnote 2: Because this process is only affected by a and p, some scholars have endorsed it as a non-arbitrary way to constrain the model and identify a unique set of APC estimates [20]. Moreover, while IE and CGLIM yield similar point estimates when a priori assumptions are valid, IE provides superior statistical efficiency [12].

Footnote 3: According to Held and Riebler [27], rotation among the APC coefficients can occur without altering model fit by using different referent categories in IE models.
The apc_ie module in Stata uses the last APC categories as reference groups by default; to check the extent of rotation, we estimated four additional models using the ie_rate/ie_norm modules [28]. Each model used different A-P-C groups as referent categories. For one such model, we used the first groups (1-1-1), and for the other three models we used randomly selected groups (14-4-14, 9-4-3, 11-4-2) of A-P-C. Our results show negligible coefficient rotation, meaning that the patterning of APC effects was nearly identical across all five models.

Abbreviations
APC: age-period-cohort; CGLIM: constrained generalized linear model; IE: intrinsic estimator; OECD: Organization for Economic Cooperation and Development; PYL: person-years lived

References
1. OECD. OECD Factbook 2014. OECD Publishing; 2014.
2. Durkheim E. Suicide: A study in sociology (JA Spaulding & G. Simpson, trans.). Glencoe, IL: Free Press (original work published 1897); 1951.
3. Stack S. Suicide: a 15-year review of the sociological literature part I: cultural and economic factors. Suicide Life-Threat. 2000;30(2):145–62.
4. Stack S. Suicide: a 15-year review of the sociological literature part II: modernization and social integration perspectives. Suicide Life-Threat. 2000;30(2):163–76.
5. Wray M, Colen C, Pescosolido B. The sociology of suicide. Annu Rev Sociol. 2011;37:505–28.
6. Kim SY, Kim M-H, Kawachi I, Cho Y. Comparative epidemiology of suicide in South Korea and Japan: effects of age, gender and suicide methods. Crisis. 2011;32:5–14.
7. Park BB, Lester D. Social integration and suicide in South Korea. Crisis. 2006;27(1):48–50.
8. Chang S-S, Gunnell D, Sterne JA, Lu T-H, Cheng AT. Was the economic crisis 1997–1998 responsible for rising suicide rates in East/Southeast Asia? A time-trend analysis for Japan, Hong Kong, South Korea, Taiwan, Singapore and Thailand. Soc Sci Med. 2009;68(7):1322–31.
9. Odagiri Y, Uchida H, Nakano M. Gender differences in age, period, and birth-cohort effect on suicide mortality rate in Japan 1985–2006. Asia Pac J Public Health. 2009;23(4):581–7.
10. Lee J, Kim S. Suicide in Korea (in Korean). Korean J Soc. 2010;44:63–94.
11. Fu WJ. Ridge estimator in singular design with application to age-period-cohort analysis of disease rates. Commun Stat-Theor M. 2000;29(2):263–78.
12. Yang Y, Fu WJ, Land KC. A methodological comparison of age-period-cohort models: the intrinsic estimator and conventional generalized linear models. Sociol Methodol. 2004;34(1):75–110.
13. Statistics Korea. Vital Statistics. Statistics Korea; 1985–2010. 2010.
14. Vital, Health and Social Statistics Division. Vital Statistics. Ministry of Health, Labour and Welfare, Japan; 1985–2010. 2010.
15. Yang Y, Land KC. Age-period-cohort analysis: New models, methods, and empirical applications. CRC Press; 2013.
16. Kupper LL, Janis JM, Karmous A, Greenberg BG. Statistical age-period-cohort analysis: a review and critique. J Chron Dis. 1985;38(10):811–30.
17. Yang Y. Trends in US adult chronic disease mortality, 1960–1999: Age, period, and cohort variations. Demography. 2008;45(2):387–416.
18. Fu WJ. A smoothing cohort model in age-period-cohort analysis with applications to homicide arrest rates and lung cancer mortality rates. Sociol Method Res. 2008;36(3):327–61.
19. Mason KO, Mason WM, Winsborough HH, Poole WK. Some methodological issues in cohort analysis of archival data. Am Sociol Rev. 1973;38:242–58.
20. Fu WJ, Land KC, Yang Y. On the intrinsic estimator and constrained estimators in age-period-cohort models. Sociol Method Res. 2011:0049124111415355.
21. Smith HL, Mason WM, Fienberg SE. Estimable functions of age, period, and cohort effects: more chimeras of the age-period-cohort accounting framework: comment on Rodgers. Am Sociol Rev. 1982;47(6):787–93.
22. Mason WM, Wolfinger NH. Cohort analysis. California Center for Population Research; 2001.
23. Yang Y, Land KC. Age-period-cohort analysis of repeated cross-section surveys: fixed or random effects? Sociol Method Res. 2008;36(3):297–326.
24. Pampel FC. Cohort size and age-specific suicide rates: A contingent relationship. Demography. 1996;33(3):341–55.
25. Kwon J-W, Chun H, Cho S-I. A closer look at the increase in suicide rates in South Korea from 1986–2005. BMC Public Health. 2009;9:1.
26. Chiu H, Takahashi Y, Suh G. Elderly suicide prevention in East Asia. Int J Geriatr Psych. 2003;18(11):973–6.
27. Held L, Riebler A. Comment on "Assessing Validity and Application Scope of the Intrinsic Estimator Approach to the Age-Period-Cohort (APC) Problem". Demography. 2013;50(6):1977.
28. Powers D. IE_RATE: Stata module to conduct age, period, and cohort (APC) analysis of tabular rate data using the intrinsic estimator. 2014.

Acknowledgements
We thank the Department of Sociology, Social Work, and Anthropology, and the Merrill-Cazier Library at Utah State University for their support. We also gratefully acknowledge support provided by the NICHD-funded University of Colorado Population Center (Project 2P2CHD066613-06). This study had no funding source.

Author information
Department of Sociology and Yun Kim Population Research Laboratory, Utah State University, 0730 Old Main Hill, 84322, Logan, UT, USA: Sun Y. Jeon & Eric N. Reither. Department of Sociology, University of Colorado Boulder, UCB 327 Ketchum Hall 264, 80309, Boulder, CO, USA: Ryan K. Masters. Correspondence to Sun Y. Jeon.

Authors' contributions
SYJ conceived the study, collected data, and drafted the manuscript. ENR led the design of the study and helped draft and edit the manuscript. SYJ and RKM conducted the statistical analyses. All authors contributed to the interpretation of results and revision of the manuscript. All authors read and approved the final manuscript.

Citation
Jeon, S.Y., Reither, E.N. & Masters, R.K. A population-based analysis of increasing rates of suicide mortality in Japan and South Korea, 1985–2010. BMC Public Health 16, 356 (2016). https://doi.org/10.1186/s12889-016-3020-2
Received: 10 August 2015
Keywords: Age-period-cohort analysis, Intrinsic estimator model, Japan
Power Series Khan Academy

A power series is an infinite polynomial in which each term has the form a_k (x - c)^k, where the a_k are the coefficients and c is the center. Power series have coefficients, values of x, and a center about which they are expanded, and operations on power series (addition, multiplication, differentiation, integration) can be carried out term by term inside the interval of convergence. The Taylor series is the source of formulas for expressing both sin x and cos x as infinite series; for both of those series, the ratio of the nth to the (n-1)th term tends to zero for all x, so they converge everywhere. By inspection, it can be difficult to see whether a series will converge or not. The geometric series provides the basic identity 1/(1 - x) = ∑_{n=0}^{∞} xⁿ, valid for |x| < 1. The important technique of solving linear differential equations with polynomial coefficients by means of power series is postponed to the next book in this series, Calculus 3c-4. Pólya conjectured that if a function has a power series with integer coefficients and radius of convergence 1, then either the function is rational or the unit circle is a natural boundary for it (the Pólya-Carlson theorem). Related topics that recur on this page include the maximum power transfer theorem, the binomial theorem, and sums of series; the practice exercises on this topic pose three types of problems, such as finding the nth term of a series. Khan Academy has been translated into dozens of languages, and 100 million people use the platform worldwide every year.
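As a quick, self-contained illustration of the geometric series identity quoted above (my own example, not taken from any of the sources referenced on this page), a partial sum already matches the closed form closely for |x| < 1:

```python
# Numerical check of 1/(1 - x) = sum_{n>=0} x^n for |x| < 1.
def geometric_partial_sum(x: float, n_terms: int) -> float:
    return sum(x**n for n in range(n_terms))

x = 0.25
print(geometric_partial_sum(x, 50))   # ~1.3333333333333333
print(1 / (1 - x))                    # 1.3333333333333333
```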
TAYLOR SERIES, POWER SERIES: the following represents an (incomplete) collection of things covered on the subject of Taylor series and power series. Since geometric series are a class of power series, we obtained the power series representation of a/(1 - r) very quickly. We will call the radius of convergence L. The series under discussion is a power series in x, centred at x = 0; it has radius of convergence R = 1, and its interval of convergence is the open interval (−1, 1). A Taylor series is a special power series that provides an alternative and easy-to-manipulate way of representing well-known functions. Series solutions, taking derivatives and index shifting: throughout these pages I will assume that you are familiar with power series and with the concept of the radius of convergence of a power series. Example: represent f(x) = 1/(1 + x²) by a power series inside its interval of convergence, graphically. A p-series can be either divergent or convergent, depending on the value of p (it converges exactly when p > 1). Euler's formula is e^(ix) = cos(x) + i sin(x), and Euler's identity is e^(iπ) = −1. There is also a fantastic TED talk by Salman Khan, in which he speaks about how and why he created Khan Academy, a carefully structured series of educational videos offering complete curricula in math and, now, other subjects.
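The ratio test mentioned in these notes can be turned into a quick numerical estimate of the radius of convergence, R = lim |a_n / a_(n+1)|. The sketch below is my own illustration; the coefficient sequences are standard textbook examples, not data taken from this page:

```python
# Estimating a radius of convergence from the ratio test R = lim |a_n / a_(n+1)|.
from math import factorial

def ratio_estimate(coeffs):
    # crude estimate using the last available pair of nonzero coefficients
    return abs(coeffs[-2] / coeffs[-1])

geometric = [1.0] * 20                                  # 1/(1 - x): a_n = 1
exponential = [1.0 / factorial(n) for n in range(20)]   # e^x: a_n = 1/n!

print(ratio_estimate(geometric))     # 1.0 -> interval of convergence (-1, 1)
print(ratio_estimate(exponential))   # 19.0, and growing: R is infinite for e^x
```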
About Khan Academy: Khan Academy is a nonprofit with a mission to provide a free, world-class education for anyone, anywhere. Khan Academy has recently partnered with the College Board, the makers of the SAT, to produce free and official SAT prep materials, and it offers interactive practice alongside its videos. On the mathematical side, the function defined by a power series is differentiable in the disc of convergence, and the function represented by the term-by-term derivative series agrees with that derivative on the same disc. Power series are useful in analysis since they arise as Taylor series of infinitely differentiable functions; in fact, Borel's theorem implies that every power series is the Taylor series of some smooth function. Taylor's theorem (actually discovered first by Gregory) states that any function satisfying certain conditions can be expressed as a Taylor series. Power series of the form ∑ k(x - a)ⁿ (where k is constant) are geometric series with initial term k and common ratio (x - a). In a previous video, we derived the formula for the sum of a finite geometric series, where a is the first term and r is our common ratio; what we want to do now is think about the sum of an infinite geometric series. A recurring worked example is finding a power series to represent x³cos(x²) using the Maclaurin series of cos(x²), and finding a function from a power series by integrating is the companion technique. Use differentiation to find a power series representation for $$\frac{1}{(3+x)^{2}}$$; what is the radius of convergence, R? (This question, quoted from a Stack Exchange post, is worked out in the derivation given further below.)
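Here is a short sketch (my own, using only the standard Maclaurin series of cosine) of the x³cos(x²) example mentioned above: substituting u = x² into cos(u) = ∑ (−1)ⁿ u²ⁿ/(2n)! and multiplying by x³ gives x³cos(x²) = ∑ (−1)ⁿ x^(4n+3)/(2n)!, and a few terms already match the closed form.

```python
# Power series for x^3 * cos(x^2) built from the Maclaurin series of cos.
from math import cos, factorial

def x3_cos_x2_partial(x: float, n_terms: int = 10) -> float:
    # sum of (-1)^n * x^(4n+3) / (2n)! for n = 0 .. n_terms-1
    return sum((-1)**n * x**(4*n + 3) / factorial(2*n) for n in range(n_terms))

x = 0.7
print(x3_cos_x2_partial(x))   # ~0.30264
print(x**3 * cos(x**2))       # same value, confirming the series representation
```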
The second series already has the proper exponent and the first series will need to be shifted down by 2 in order to get the exponent up to an \(n\). If you're seeing this message, it means we're having trouble loading external resources on our website. You have to admit this is pretty neat. Infinite series can be daunting, as they are quite hard to visualize. Does this series converge? This is a question that we have been ignoring, but it is time to face it. And it doesn't matter whether the multiplier is, say, 100, or 10,000, or 1/10,000 because any number, big or small, times the. Free Taylor/Maclaurin Series calculator - Find the Taylor/Maclaurin series representation of functions step-by-step. Fantastic TED talk by Salman Khan, in which in speaks about: "…how and why he created the remarkable Khan Academy, a carefully structured series of educational videos offering complete curricula in math and, now, other subjects. Power series of the form Σk(x-a)ⁿ (where k is constant) are a geometric series with initial term k and common ratio (x-a). Taylor's theorem (actually discovered first by Gregory) states that any function satisfying certain conditions can be expressed as a Taylor series. By inspection, it can be difficult to see whether a series will converge or not. Every Taylor series provides the. If you have any questions, let me know in the comments. Abeka Academy is accredited by the Middle States Association of Colleges and Schools Commissions on Elementary and Secondary Schools (MSA CESS) [3624 Market Street, Philadelphia, PA 19104; Telephone: (267) 284-5000; e-mail: [email protected] PatrickJMT: making FREE and hopefully useful math videos for the world! Get my latest book. Technically, the Maclaurin series for this can actually be derived from what you should have been taught as the power series based on #1/(1-x)#. Make sure you stop by and say hi. It has reinforced for me that teachers are some of the brightest and most talented people in the world. Analytic Methods. Return to the Power Series starting page. Use differentiation to find a power series representation for $$\frac{1}{(3+x)^{2}}$$ What is the radius of convergence, R? Stack Exchange Network Stack Exchange network consists of 175 Q&A communities including Stack Overflow , the largest, most trusted online community for developers to learn, share their knowledge, and build their careers. Khan Academy is a nonprofit with the mission of providing a free, world-class education for anyone, anywhere. Agreed, I'm taking calculus 2 currently and I've found watching Khan's videos before I read the section helps me understand. Usually, a given power series will converge (that is, approach a finite sum) for all values of x within a certain interval around zero—in particular,. Power series are useful in analysis since they arise as Taylor series of infinitely differentiable functions. Chapter 4 : Series and Sequences. The Industry's Best Value Learning Membership for IT and Project Management Professionals. La formule d'Euler est eⁱˣ=cos(x) + isin(x) et l'identité d'Euler est e^(iπ) = -1. Not even a mighty warrior can break a frail arrow when it is multiplied and supported by its fellows. Get the free "Series Calculator" widget for your website, blog, Wordpress, Blogger, or iGoogle. Provides worked examples of typical introductory exercises involving sequences and series. Series and Sigma Notation 1 - Cool Math has free online cool math lessons, cool math games and fun math activities. 
"[A similar provider] wasn't helpful to us because the express goal was not to simply obtain cloud-related IT Certifications, but rather to obtain those Certifications as a result of expanding our cloud capabilities and practice. Power Series Functions. Better yet, trim for the flattest stable power-off glide (no porpoising). See how it's done in this video. See the complete profile on LinkedIn and discover Sajid's connections and jobs at similar companies. I've been working on something that inspired me to use a specific number series and I do not know what this series is called. Maclaurin Series. About Khan Academy: Khan Academy is a nonprofit with a mission to provide a free, world-class education for anyone, anywhere. TED began in 1984 as a conference where Technology, Entertainment and Design converged, and today covers almost all topics — from science to business to global issues — in more than 110 languages. In many situations c (the center of the series) is equal to zero, for instance when considering a Maclaurin series. We know how education works. About Us Stratford Academy is the oldest independent, non-sectarian, college preparatory school in Middle Georgia. The Electric academy is not just a blog but a community of journeymen and apprentices. Series and Parallel Circuits Working Together. In part b), we found that the limit was zero, so the series converged for all x. This is useful for analysis when the sum of a series online must be presented and found as a solution. For more information, visit www. use differentiation to find a power series representation for 1/(3+x)^2. Here is a set of practice problems to accompany the Power Series section of the Series & Sequences chapter of the notes for Paul Dawkins Calculus II course at Lamar University. Finding a power series. You can specify the order of the Taylor polynomial. The reader is also referred to Calculus 3b. About Khan Academy: Khan Academy is a nonprofit with a mission to provide a free, world-class education for anyone, anywhere. Calculus covers all topics from a typical high school or first-year college calculus course, including: limits, continuity, differentiation, integration, power series, plane curves, and elementary differential equations. Você tem que admitir que isso é muito legal. I was wondering if Khan Academy could maybe consider posting videos on the Fourier series and analysis in math? Community Help Center Report a Problem Help Center Community Report a Problem Sign in. Cooking Academy is a casual game in all senses. MIT covers power series and Taylor series in this module of their single variable calculus course; Khan Academy has a series (pun intended) on Taylor series. This has been a great supplement to my education. Sign in to review and manage your activity, including things you've searched for, websites you've visited, and videos you've watched. the series for , , and ), and/ B BB sin cos. resolving power synonyms, resolving power pronunciation, resolving power translation, English dictionary definition of resolving power. Power series are infinite series of the form Σaₙxⁿ (where n is a positive integer). Three resistors that are rated at 5 , 2 and 1 are connected in series to a battery. This simple algebraic manipulation allows us to apply the integral test. News, email and search are just the beginning. Power series are "approximate formulas" in much the same sense as finite ‐ precision real numbers are approximate numbers. 
In many situations c (the center of the series) is equal to zero, for instance when considering a Maclaurin series. Interval of convergence. Series and Sigma Notation 1 - Cool Math has free online cool math lessons, cool math games and fun math activities. You will learn. Types of Problems There are three types of problems in this exercise: Find the nth term in the. A power series may converge for some values of \(x\) and not for other values of \(x\). resolving power synonyms, resolving power pronunciation, resolving power translation, English dictionary definition of resolving power. Question: What are the practical applications of the Taylor Series? Whether it's in a mathematical context, or in real world examples. You can watch as much as you want, whenever you want without a single commercial – all for one low monthly price. Growth Mindset #2 - The magic of mistakes. You can't beat the value of a Christian education at AOA. Does this series converge? This is a question that we have been ignoring, but it is time to face it. use differentiation to find a power series representation for 1/(3+x)^2. On Thursday, the BJP's Udayanraje Bhosale faced the unexpected: a shock defeat in the Satara Lok Sabha bypoll in Maharashtra at the hands of Shriniwas Patil, a former governor of Sikkim and close aide of NCP supremo Sharad Pawar. Ratio Test: The ratio test is a test for determining whether a series converges. A power series is any series of the following form: Notice how the power series differs from the geometric series: In a geometric series, every term has the same coefficient. The function associated with is differentiable in the disc of convergence, and the function represented by agrees with on the disc of convergence. As we will see, the values of x for which a power series converges is always an interval. Then and have the same radius of convergence. Getting worse usually means the series goes to infinity for some values of x. Within its interval of convergence, the derivative of a power series is the sum of derivatives of individual terms: [Σf(x)]'=Σf'(x). By inspection, it can be difficult to see whether a series will converge or not. A power series is a function of the form: [math]f(x)=a_0+a_1x+a_2x^2+a_3x^3+\dots[/math] They are useful because of a combination of two things: 1) A lot of functions can either be written as a power series or approximated very conveniently. Providing researchers with access to millions of scientific documents from journals, books, series, protocols, reference works and proceedings. This may add considerable effort to the solution and if the power series solution can be identified as an elementary function, it's generally easier to just solve the homogeneous equation and use either the method of undetermined coefficients or the method of variation of parameters. Located just two blocks from the Liberty Bell and Independence Hall, it is the only museum devoted to the U. Differentiation and Integration. Members of the Candlecharts Academy love getting access to all of our training with just one password. If is too large, thenB B the series will diverge:. Power series is a sum of terms of the general form aₙ(x-a)ⁿ. Similarly, this tells us from a power series perspective that when x is between -1 and 1. How can you perform well on the new reading section of the SAT if you don't fully understand the language being used in the directions and in the questions?. 
TED is a nonpartisan nonprofit devoted to spreading ideas, usually in the form of short, powerful talks. There's an example there to help solidify the concepts taught. An important type of series is called the p-series. Typically, students practice by working through lots of sample problems and checking their answers against those provided by the textbook or the instructor. It opened its doors in September 1960 in the Overlook Mansion (now the Woodruff House) on Coleman Hill in historic downtown Macon. The reader is also referred to Calculus 3b. This exercise shows user how to turn a function into a power series. A vocabulary list featuring The New SAT: The Language of the Test. Prime Minister of Pakistan. As an accredited, independent, college-prep school for grades Pre-K through 12, Pioneer Academy is committed to enriching the lives of students, and ensuring their success in college and life. The procedure that Series follows in constructing a power series is largely analogous to the procedure that N follows in constructing a real ‐ number approximation. Despite the fact that you add up an infinite number of terms, some of these series total up to an ordinary finite number. As the names suggest, the power series is a special type of series and it is extensively used in Numerical Analysis and related mathematical modelling. Power series have coefficients, x values, and have to be centred at a certain value a. \s* [^]*\s* [^"]*)"\s/>\s*]]> http://dj. Step 3: Apply the Integral Test. So, the power series above converges for x in [-1,1). Not even a mighty warrior can break a frail arrow when it is multiplied and supported by its fellows. About Khan Academy: Khan Academy is a nonprofit with a mission to provide a free, world-class education for anyone, anywhere. TAYLOR SERIES, POWER SERIES The following represents an (incomplete) collection of things that we covered on the sub-ject of Taylor series and power series. Khan Academy. This may add considerable effort to the solution and if the power series solution can be identified as an elementary function, it's generally easier to just solve the homogeneous equation and use either the method of undetermined coefficients or the method of variation of parameters. Khan Academy Wiki is a FANDOM Lifestyle. How to generate power series solutions to differential equations. (*See Note) If you are currently enrolled in MPP courses or tracks, please complete them by December 31, 2019. It opened its doors in September 1960 in the Overlook Mansion (now the Woodruff House) on Coleman Hill in historic downtown Macon. Khan Academy is a nonprofit with the mission of providing a free, world-class education for anyone, anywhere. Power Series Functions. I will assume here that the reader knows basic facts about calculus. Welcome to My Activity. For series convergence determination a variety of sufficient criterions of convergence or divergence of a series have been found. Power series, in mathematics, an infinite series that can be thought of as a polynomial with an infinite number of terms, such as 1 + x + x2 + x3 +⋯. As the names suggest, the power series is a special type of series and it is extensively used in Numerical Analysis and related mathematical modelling. 2) Enter a power-off dive at a 45-degree angle, then release the sticks, and observe the plane's behavior:. Finding a power series. This can be seen by fixing and supposing that there exists a subsequence such that is unbounded. 
Se você está vendo esta mensagem, significa que estamos tendo problemas para carregar recursos externos em nosso website. Operations on power series. 2019 School Grades: Somerset Neighborhood School : A ***Somerset Academy Preparatory Middle School : C***Somerset Academy Miramar High School : B***Somerset Neighborhood School is a Florida School of Excellence. TED began in 1984 as a conference where Technology, Entertainment and Design converged, and today covers almost all topics — from science to business to global issues — in more than 110 languages. And it doesn't matter whether the multiplier is, say, 100, or 10,000, or 1/10,000 because any number, big or small, times the. Complete Solution Before starting this problem, note that the Taylor series expansion of any function about the point c = 0 is the same as finding its Maclaurin series expansion. In very basic cases, power series evaluated at a point become geometric. For a regular singular point, a Laurent series expansion can also be used. The widget will compute the power series for your function about a (if possible), and show graphs of the first couple of approximations. Chapter 4 : Series and Sequences. In calculus, an infinite series is "simply" the adding up of all the terms in an infinite sequence. A series such as 3 + 7 + 11 + 15 + ··· + 99 or 10 + 20 + 30 + ··· + 1000 which has a constant difference between terms. where is a binomial coefficient and is a real number. Besides finding the sum of a number sequence online, server finds the partial sum of a series online. Find all values of x for which a power series converges. Alternating Series. As we will see, the values of x for which a power series converges is always an interval. Right? In fact, many aspects of learning — in homes, at schools, at work and elsewhere — are evolving rapidly, along with our. If the series does not converge, OnSolver. If you're behind a web filter, please make sure that the domains *. Great Minds is a non-profit organization founded in 2007 by teachers and scholars who want to ensure that all students receive a content-rich education. The Integration and differentiation of power series exercise appears under the Integral calculus Math Mission. I noticed the differential equations lectures stop after the Laplace Transformation sections. Step (2) There are only three criteria we need to check before applying the integral test. With Cooking Academy you can have fun while preparing recipes from all over the world – though only for a short time. Thus both series are absolutely convergent for all x. A Khan Academy é uma organização sem fins lucrativos com a missão de oferecer ensino de qualidade gratuito para qualquer pessoa, em qualquer lugar. Growth Mindset #3 - The power of YET. From there we can mix and match. Power series of ln(1+x³) (video) | Khan Academy We can represent ln(1+x³) with a power series by representing its derivative as a power series and then integrating that series. We use Simple English words and grammar here. The difference is that the convergence of the series will now depend upon the values of \(x\) that we put into the series. resolving power synonyms, resolving power pronunciation, resolving power translation, English dictionary definition of resolving power. The interval of converges of a power series is the interval of input values for which the series converges. A p-series can be either divergent or convergent, depending on its value. Power series is a sum of terms of the general form aₙ(x-a)ⁿ. 
The value of x determines the convergence or divergence of the series, meaning at certain x values the nth partial sum goes to infinity, and at other x values the nth partial sum actually goes to a number. Calculus is part of the acclaimed Art of Problem Solving curriculum designed to challenge high-performing middle and high school students. Find communities you're interested in, and become part of an online community! Press J to jump to the feed. The Taylor (or more general) series of a function about a point up to order may be found using Series[f, x, a, n]. For example, it's hard to tell from the formula that sin(x) is periodic. The Better Money Habits and Khan Academy partnership was created to provide financial education, career planning and other online learning resources. This exercise involves finding a number in the series. Learn Econometrics for free. More generally, a series of the form is called a power series in (x-a) or a power series at a. Analytic Methods. khanacademy. If a series doesn't converge, it's. This exercise creates power series from a given power series by derivatives and integrals. Find all values of x for which a power series converges. ', 'If you're afraid - don't do it, - if you're doing it - don't be afraid!', and 'an action comitted in anger is an action doomed to failure. This may add considerable effort to the solution and if the power series solution can be identified as an elementary function, it's generally easier to just solve the homogeneous equation and use either the method of undetermined coefficients or the method of variation of parameters. You can watch as much as you want, whenever you want without a single commercial – all for one low monthly price. Find communities you're interested in, and become part of an online community! Press J to jump to the feed. Shah Rukh Khan (born 2 November 1965), also known by the initialism SRK, is an Indian actor, film producer, and television personality. Angle of Depression. See how it's done in this video. This event marked the beginning of a new branch of mathematics, known as fractal. Use differentiation to find a power series representation for $$\frac{1}{(3+x)^{2}}$$ What is the radius of convergence, R? Stack Exchange Network Stack Exchange network consists of 175 Q&A communities including Stack Overflow , the largest, most trusted online community for developers to learn, share their knowledge, and build their careers. Why do we care what the power series expansion of sin(x) is? If we use enough terms of the series we can get a good estimate of the value of sin(x) for any value of x. Você tem que admitir que isso é muito legal. A power series is a series of the form where x is a variable and the c[n] are constants called the coefficients of the series. Until next time, stay classy Academy!. Power series have coefficients, x values, and have to be centred at a certain value a. It features nice, colorful graphics, it's easy and fun to play and it's the perfect pastime for short breaks – in fact, it tends to get repetitive if you play it for too long. So, the function 1/(1-x) can be represented as a power series for part of its domain. Analytic Geometry. Step (2) There are only three criteria we need to check before applying the integral test. which is valid for -1 X C, this gives the circuit a lagging power factor. Leah Weimerskirch, Achievement First, New Haven, Connecticut. khanacademy. Plane pulls sharply out of the dive as soon as you release the elevator: The plane is extremely nose-heavy. 
My class, and many other's, continue onto power series solutions of differential equations. Se você está vendo esta mensagem, significa que estamos tendo problemas para carregar recursos externos em nosso website. For series convergence determination a variety of sufficient criterions of convergence or divergence of a series have been found. In this section we give a brief review of some of the basics of power series. Learn for free! Why sign up?. Included are discussions of using the Ratio Test to determine if a power series will converge, adding/subtracting power series, differentiating power series and index shifts for power series. A community based initiative which was established in 2008 and has organised over 50 Sporting events for the Redditch Community and beyond, in aid of charitable causes Locally, Nationally & Internationally. Not even a mighty warrior can break a frail arrow when it is multiplied and supported by its fellows. Power Series: A power series is a sum of powers of a variable. Free Taylor/Maclaurin Series calculator - Find the Taylor/Maclaurin series representation of functions step-by-step. First, let f(x) be a continuous real valued function.
CommonCrawl
Towards key-frame extraction methods for 3D video: a review Lino Ferreira ORCID: orcid.org/0000-0003-0648-60671,2,4, Luis A. da Silva Cruz2,3 & Pedro Assuncao1,4 The increasing rate of creation and use of 3D video content leads to a pressing need for methods capable of lowering the cost of 3D video searching, browsing and indexing operations, with improved content selection performance. Video summarisation methods specifically tailored for 3D video content fulfil these requirements. This paper presents a review of the state-of-the-art of a crucial component of 3D video summarisation algorithms: the key-frame extraction methods. The methods reviewed cover 3D video key-frame extraction as well as shot boundary detection methods specific for use in 3D video. The performance metrics used to evaluate the key-frame extraction methods and the summaries derived from those key-frames are presented and discussed. The applications of these methods are also presented and discussed, followed by an exposition about current research challenges on 3D video summarisation methods. In recent years, new features have been implemented in video applications and terminal equipment in response to user demand for more interactive and immersive viewing experiences, such as those provided by 3D video. This new visual experience is created by depth information that is part of 3D video and absent in classic 2D video. The inclusion of depth information in video signals is not a recent innovation, but the interest in this type of content and in aspects related to it, such as acquisition, analysis, coding, transmission and visualisation, has been increasing recently [1, 2]. Lately, 3D video has been attracting attention from industry, namely content producers, equipment providers and distributors, and from the research community, mostly on account of the improvements in Quality of Experience that it provides to viewers [3], as well as due to the new business opportunities presented by this emerging multimedia format. In the past, video repositories were relatively small, so that indexing and retrieval operations were easy to perform. More recently, the massification of 3D video and its applications has resulted in the generation of huge amounts of data, increasing the need for methods that can efficiently index, search, browse and summarise the relevant information with minimum human intervention. Furthermore, 3D video description and management are also required to enable quick presentation of the most important information in a user-friendly manner [4, 5]. Video summarisation is a video-content representation method that can fulfil these requirements. In contrast to summarisation of 2D video, which has been the subject of a significant amount of research, 3D video summarisation is still a relatively unexplored research problem which deserves more attention. A video summary is a short version of a full-length video that preserves the essential visual and semantic information of the original unabridged content. In the video summarisation process, a subset of key-frames or a set of shorter video sub-sequences (with or without audio) is chosen to represent the most important segments of the original video content according to predefined criteria [4]. This video content representation can be used in the promotion of movies, TV channels or other entertainment services.
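To make the two summary types concrete in terms of data structures, the following minimal Python sketch (not taken from any of the reviewed works; the frame indices are hypothetical) represents a key-frame summary as a list of frame indices and a video skim as a list of temporal segments, together with a helper that expands a skim into the frames it covers.

```python
# Minimal sketch: the two summary types as simple data structures over the frame
# indices of a hypothetical 1000-frame sequence (indices chosen for illustration only).
key_frame_summary = [12, 250, 487, 731, 902]      # static summary: isolated representative frames
video_skim = [(0, 75), (240, 310), (880, 955)]     # dynamic summary: (start, end) temporal segments

def skim_frame_indices(skim):
    """Expand a skim, given as (start, end) segments, into the frame indices it covers."""
    return [i for start, end in skim for i in range(start, end + 1)]

print(len(skim_frame_indices(video_skim)))  # total number of frames retained by the skim
```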
Video summarisation can also be used for content adaptation operations in constrained communication environments where bandwidth, storage capacity, decoding power or visualisation time is limited [6]. The literature defines two types of video summaries, namely those based on key-frames and those composed of video skims [7]. A video summary based on key-frames is made up of a set of relevant frames selected from the video shots obtained from the original video. This type of summary is static, since the key-frames, being temporally distant and non-uniformly distributed, do not enable adequate rendering/reproduction of the original temporal evolution of the video content. Here, the video content is displayed in a quick and compact way for browsing and navigation purposes, without complying with timing or synchronisation requirements. Video skims are usually built by extracting the most relevant temporal segments (with or without audio) from the source sequence. After the extraction, all temporal segments are concatenated into a sequential video with a much shorter length than the source sequence. The methods used for computation of key-frame and video skim summaries are quite distinct, but these two types of representations for video content can be transformed from one to the other. A video skim can be generated from a key-frame summary by adding frames or segments that include the key-frames, while a key-frame summary can be created from a video skim by uniform sampling or by selecting the most representative frame from each video skim segment [4]. In regard to 3D video content, a detailed study of the existing scientific literature reveals that comprehensive comparative studies of 3D video summarisation methods are missing. To help fill this gap, this paper presents a review of 3D video summarisation methods based on key-frames. This overview of the current state-of-the-art is mainly focused on the methods and features that are used to generate and evaluate 3D video summaries and not so much on the limitations or performance of specific methods. Since experimental set-ups, 3D video formats and features used for summarisation are considerably different from one computational method to another, a fair comparative analysis of the results, advantages and shortcomings of all methods is almost impossible. This paper also identifies open issues to be investigated in the area of 3D key-frame extraction for summarisation. The remainder of the paper is organised as follows. Section 1.1 presents the existing 3D video representation formats and relevant features for the purpose of summarisation; then, in Section 1.3, the generic framework normally used in 3D key-frame extraction methods is presented, after which Section 1.4 reviews the most important shot boundary detection (SBD) methods for 3D video. Then, Section 1.5 characterises the relevant methods used in 3D key-frame extraction for summarisation while Section 1.6 addresses common methods used for presentation of key-frames. Section 1.7 describes performance evaluation methods suitable for 3D video summaries based on key-frames, and Section 1.8 describes some applications of this kind of summary. Section 1.9 discusses the prospects and challenges of the 3D key-frame extraction methods, and finally Section 2 concludes the paper. 3D video representation formats In this review article, '3D video' is defined as a representation format which differs from 2D video by the inclusion of information that allows viewers to perceive depth.
This depth information can be conveyed either indirectly via two or more views of the scene (e.g. left and right views) or explicitly through depth maps or geometric representations of connected 3D points and surfaces. The most common formats used to represent 3D visual scenes include natural video and/or geometric representations. Stereoscopic video is composed of two slightly shifted video views of the same scene, where one corresponds to what would be observed by a left eye and the other by the right eye of a human observer. Since these are two views of the same scene, the corresponding images are related by the binocular disparity, which refers to the difference in the image plane coordinates of similar features captured in two stereo images. The scene depth is perceived from the disparity when using stereoscopic displays and can also be computed for different types of computer vision applications (e.g. measuring distances in 3D navigation). Multiview video (MVV) is composed of more than two video views shifted in the vertical and/or horizontal position. Typically, MVV acquisition is done using an array of synchronised cameras with some spatial arrangement, which capture the visual scene from different viewpoints. The MVV format is useful for applications supported by autostereoscopic displays with or without head tracking, which render a denser set of 3D views that are displayed through lenticular and parallax barriers. With this type of display, viewers are able to see the portrayed scene from different angles by moving the head along a horizontal plane. A typical application of this video format is freeview navigation, where users are given the option of freely choosing the preferred viewpoint of the scene. Video-plus-depth (V+D) is composed of a video signal (texture) and the respective depth map. Each value of the depth map represents the distance of the object to the camera for the corresponding pixel position. Typically, the depth information is quantised with 8 bits, where the closest point is represented with value 255 and the most distant point is represented with 0. Additional virtual views (i.e. not captured) of the same imaged scene can be synthesised from the original V+D information by using 3D warping transformations. Several different applications and services can benefit from the V+D format, due to its inherent backward compatibility with 2D video systems and the higher compression efficiency achievable when compared to stereoscopic video. For instance, 3DTV services can be seamlessly deployed while maintaining compatibility with legacy 2D video services. Multiview video-plus-depth (MVD) is composed of video and depth maps for more than two views of the same scene. The depth information can be computed from different views or captured directly using time-of-flight (ToF) sensors. MVD can be used to support dense multiview autostereoscopic displays in a relatively efficient manner. From a relatively small set of different views and corresponding depth maps, a much larger set of views can be synthesised at the display side, avoiding coding and transmission of a great deal of data while enabling smooth transitions between viewpoints. Several emerging applications such as free viewpoint video and free viewpoint TV will use the MVD format due to its compact representation of 3D visual information. Mixed-reality applications and gaming are also important application fields for MVD.
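As an illustration of the 8-bit depth convention mentioned above for V+D, the sketch below converts a metric depth map into an 8-bit depth map where the closest point maps to 255 and the farthest to 0. It is an assumption-laden example rather than part of any standard: z_near and z_far are hypothetical clipping distances and a simple linear mapping is used, whereas some systems quantise inverse depth instead.

```python
import numpy as np

def quantise_depth_8bit(depth_m, z_near, z_far):
    """Map metric depth (metres) to the 8-bit V+D convention described above:
    the closest point (z_near) maps to 255 and the farthest (z_far) to 0.
    A linear mapping is used here for simplicity; some systems quantise 1/Z instead."""
    d = np.clip(depth_m, z_near, z_far)
    return np.round(255.0 * (z_far - d) / (z_far - z_near)).astype(np.uint8)

# Example: a 2x2 depth map with objects between 1 m and 10 m from the camera.
depth = np.array([[1.0, 2.5], [5.0, 10.0]])
print(quantise_depth_8bit(depth, z_near=1.0, z_far=10.0))
```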
3D computer graphics use a geometry-based representation, where the scene is described by a set of connected 3D points (or vertices), with associated texture/colour mapped onto them. The data content of this format can be organised into geometry, appearance and scene information [8]. The geometry includes the 3D position of vertices and polygons (e.g. triangles) that are constructed by joining these vertices. The appearance is an optional attribute which associates some properties (e.g. colour, texture coordinates) to the geometry data. Finally, the scene information includes the layout of a 3D scene with reference to the camera (or view), the light source and the description of other 3D models if they are present in the scene. 3D computer graphics can provide a more immersive and interactive experience than conventional 2D video, since the user is given more freedom to interact with the content and get a realistic feeling of 'being there'. Relevant applications can be found in quite different fields, such as medicine, structural engineering, the automobile industry, architecture and entertainment. Plenoptic video is composed of a very large number of views (e.g. hundreds or thousands) captured simultaneously. This multiple view acquisition process can be interpreted as a partial sampling of the plenoptic function [9], which represents not only spatial or temporal information but also angular information about the captured light rays, i.e. it captures a segment of the whole observable scene represented by a light field. In practice, a 3D plenoptic image is captured by a normal image sensor placed behind an array of uniformly spaced semi-spherical micro-lenses. Each micro-lens works as an individual low resolution camera that captures the scene from an angle (viewpoint) slightly different from that of its neighbours. Plenoptic video, also known as light field video, is an emerging visual data representation with known applications in computational photography, microscopy, visual inspection and medical imaging, among others. 3D video features for summarisation The scene depth is the additional information that is either implicitly or explicitly conveyed by 3D video formats. Therefore, depth is also the signal component that most contributes to distinguishing 3D video summarisation methods from those used for 2D video. One of the first works combining depth with features of 2D video to summarise 3D video was done by Doulamis et al. [10]. The authors proposed an algorithm jointly operating on both the depth map and the left channel image to obtain a feature vector for use in video segmentation and key-frame extraction methods. The feature vector includes segment size, location, colour and depth. Another important feature is the depth variance associated with the temporal activity, which was used by Ferreira et al. in [11] for temporal segmentation of 3D video. The average stereo disparity per frame and temporal features such as image difference and histogram difference are computed and combined into feature vectors, which in turn are used by a clustering algorithm to partition the 3D video into temporal segments. Another work using frame intensity histogram distributions as features and the Jensen-Shannon difference to measure frame difference in feature space is presented in [12]. This is used to segment a video clip into shots, and then to choose key-frames in each one. More recently, Papachristou et al.
in [13] also segmented 3D video using low-level features obtained from disparity, colour and texture descriptors computed from histograms and wavelet moments. An improved three-dimensional local sparse motion scale invariant feature transform descriptor is used in [14], for RGB-D videos, based on grey, depth and optical flow pyramids that are built for both colour frames and depth maps. Point features determined with a SIFT descriptor are combined with the depth information of each point as well as optical-flow-derived motion information. Although this work is focused on gesture recognition, the features and similarity measures may also be used in key-frame extraction methods. The work presented in [15] uses vectors containing moments (mean, standard deviation, skewness and kurtosis) of signature profiles of blocks with variable size for the luminance and disparity frames. A descriptor of frame moments was developed for summarising stereoscopic video through key-frame extraction and also to produce stereoscopic video skims. Yanwei et al. in [16] proposed a multiview video summarisation method which uses low-level and high-level features. The low-level features are based on visual attributes of the video, such as colour histograms, edge histograms and wavelets, while for high-level features the authors use the Viola-Jones face detector on each frame. Geometric features are also relevant for temporal segmentation and extraction of key-frames in 3D visual information. For instance, Assa et al. in [17] proposed a method to produce an action synopsis of skeletal animation sequences for presenting motion in still images. The method selects key-frames based on skeleton joints and their associated attributes (joint positions, joint angles, joint velocities, and joint angular velocities). In [18], the authors also use geometric features, such as the number and location of surface vertices, to produce video summaries of animation sequences. Other geometric features used by Jianfeng et al. in [19] to summarise 3D video are feature vectors formed by the histograms of the vertices in the spherical coordinate system. A different type of feature relies on 3D shape descriptors. For instance, Yamasaki et al. in [20] used the shape of 3D models to split the video into different motion/pose temporal segments. Relevant shape features, such as shape histograms, shape distribution, spin images and spherical harmonics, were studied in [21], where the performance of shape similarity metrics is evaluated for applications in 3D video sequences of people. Since similarity measures are of utmost importance in summarisation, this is a relevant work in the context of this paper. Another type of feature, used in [22] for key-frame extraction, is based on deformation analysis of animated meshes, while the vertex positions in mesh models and motion intensity were used by Xu et al. in [23] for temporal segmentation of 3D video. 3D key-frame extraction framework Summarisation of 3D video follows a generic processing chain that is extended from 2D video by considering the inherent depth and geometric information as relevant feature contributors for selecting the dominating content in 3D moving scenes. A possible approach is based on clustering, by grouping similar frames according to some similarity measure [24], without any prior processing or feature extraction. However, a more generic and systematic approach that better suits the problem of 3D video summarisation follows the three-step framework of Fig.
1, where the entire video sequence is first divided into video shots based on scene transitions using an SBD method, followed by a key-frame extraction method applied to each video shot to extract the most representative frames, based on the specific properties of the video content and similarity measures. Finally, the extracted key-frames are either presented to the viewers or stored, following some predefined presentation structure. A conceptual framework for key-frame summarisation Following the conceptual framework shown in Fig. 1, the input video is segmented into video shots, mostly based on spatio-temporal criteria, but other criteria can be used, such as those based on motion [20, 25] or on the combination of temporal and depth features [11]. More details can be found in Section 1.4. After this segmentation, one or more key-frames are extracted from each video shot according to user-defined parameters, or based on specific requirements (in Fig. 1, only one key-frame is extracted). The most relevant key-frame extraction methods are presented in Section 1.5. Once the key-frames are extracted, they need to be presented in an organised manner for easy viewing during video browsing or navigation. In this framework, three key-frame presentation methods are described: static storyboard, dynamic slideshow and a single image based on the stroboscopic effect; other methods can be found in the literature (see Section 1.6). The key-frame presentation methods are independent of the key-frame extraction operation and thus the same key-frame summary can be presented to viewers in different ways. Shot boundary detection In the recent past, the development of SBD methods for 2D video received a lot of attention from the video processing research community. However, very few works have investigated the SBD problem in the context of 3D video, especially taking into account depth information. Relevant surveys of video SBD methods with specific application to 2D video can be found in the literature [26–28]. In this section, we briefly introduce the main concepts behind these methods for 2D video. Then, the most promising and best performing SBD methods used for 3D key-frame extraction are explained in detail. A video segment can be decomposed into a hierarchical structure of scenes, video shots and frames, with the linear video first divided into video scenes, which may comprise one or more video shots (sets of correlated frames). A video scene is defined as a set of frames which is continuous and temporally and spatially cohesive [29], while a video shot may also be defined by camera operations, such as zoom and pan. Thus, the video shot is the fundamental unit in the content structure of a video sequence. Since its size is variable, the identification of the start and end of video shots is done using specific SBD methods. Figure 2 presents a generic framework of an SBD method. While this framework is similar for both 2D and 3D video, the actual algorithms used for each type of content are not the same due to the difference in their relevant features. Firstly, the relevant visual features are computed, in general forming feature vectors for each video frame, as described in Section 1.2. In the second step, the visual features of consecutive frames are compared using specific similarity measures, and decision criteria are then used to identify shot boundaries. The decision methods used to find shot boundaries can be based on static thresholds (as in Fig.
2), adaptive thresholds (thresholds that depend on the statistics of the visual features used), B-spline fittings [30], support vector machines (SVM) [31] and K-means clustering [11]. The detection accuracy of SBD methods is improved by combining several visual features [32]. A generic diagram of the SBD framework Video shot boundaries can be classified into two types: abrupt shot boundary (ASB) (as in Fig. 2) and gradual shot boundary (GSB), according to the type of scene transition, which in general is related to content variation over time. This is common to 2D and 3D video, despite the fact that scene transitions in 3D video may include depth changes besides changes in the visual content itself. In ASB, the scene transition occurs over very few frames; usually a single frame defines the boundary. In the case of GSB, the transition takes place gradually over a short span of frames. The most common gradual transitions are fade-ins, fade-outs, dissolves and wipes [26–28]. A common problem in SBD is the correct discrimination between the camera operations and the object motion that originate gradual transitions, since the temporal variation of the frame content can be of the same order of magnitude and take place over the same number of frames. This similarity of visual effects caused by camera operations and object motion can induce false detections of gradual shot boundaries. This problem is aggravated for video sequences with intense motion. SBD methods Doulamis et al. in [10] proposed a key-frame extraction method for stereo video which includes an SBD method. Here, the entire video sequence is divided into video shots using an algorithm based on the analysis of DC coefficients of compressed videos, following the solution proposed in [33]. More recently, Papachristou et al. in [13] presented a framework for stereoscopic video shot classification that uses a well-known method designed for 2D video to segment the original stereoscopic video into shots [34]. However, this method was applied only to the colour channels of the videos to be summarised. Ferreira et al. [11] proposed an algorithm to detect 3D shot boundaries (3DSB) based on a joint depth-temporal criterion. The absolute frame difference and the sum of absolute luminance histogram differences are used as the relevant measures in the temporal dimension, while in the depth dimension, the variance of depth in each frame is used. A K-means clustering algorithm that does not require training and does not use thresholds is applied to choose the 3DSB transition frames. Ferreira's method is independent of the video content and can be applied to 2D or 3D video shot boundary identification. In the case of 2D video, absolute frame differences and the sum of absolute luminance histogram differences are used.
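A simplified sketch of this kind of joint depth-temporal criterion is given below. It is only an illustration of the idea, not the algorithm of [11]: scikit-learn's K-means is assumed to be available, luma and depth are assumed to be sequences of per-frame 2D arrays, and the rule for picking the boundary cluster is a heuristic chosen for this example.

```python
import numpy as np
from sklearn.cluster import KMeans  # assumed available

def shot_boundary_candidates(luma, depth):
    """Illustrative joint depth-temporal SBD criterion (not the exact method of [11]):
    per-frame features are the mean absolute luminance difference to the previous frame,
    the absolute luminance-histogram difference and the depth variance; frames are then
    clustered with K-means (k=2) and the cluster with the largest mean frame difference
    is returned as the set of shot boundary candidates."""
    feats = []
    for t in range(1, len(luma)):
        frame_diff = np.abs(luma[t].astype(float) - luma[t - 1].astype(float)).mean()
        h_t, _ = np.histogram(luma[t], bins=64, range=(0, 255))
        h_p, _ = np.histogram(luma[t - 1], bins=64, range=(0, 255))
        hist_diff = np.abs(h_t - h_p).sum()
        feats.append([frame_diff, hist_diff, float(np.var(depth[t]))])
    feats = np.array(feats)
    labels = KMeans(n_clusters=2, n_init=10).fit_predict(feats)
    boundary_cluster = np.argmax([feats[labels == c, 0].mean() for c in (0, 1)])
    return [t + 1 for t, c in enumerate(labels) if c == boundary_cluster]
```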
Some methods target segmentation of 3D mesh sequences using properties of 3D objects, such as shape and motion/action (e.g. human body motion, raising hands), to detect the shot boundaries. Yamasaki et al. [20] proposed a temporal segmentation method for 3D video recordings of dances, which is based on motion speed, i.e. when a dancer/person changes motion type or direction, the motion becomes small during some short period and in some cases it is even paused for some instants, according to the type of dance. To seek the points where the motion speed becomes small, the authors used an iterative closest point algorithm proposed in [35], which is employed in the 3D space (spherical coordinates). In contrast to conventional approaches based on thresholds, the authors devised a video segmentation scheme appropriate for different types of dance. In this scheme, each local minimum is compared with the local maxima occurring before (\(lmax_{bef}\)) and after (\(lmax_{aft}\)) the local minimum. When \(lmax_{bef}\) and \(lmax_{aft}\) are 1.1 times larger than the local minimum, a temporal segmentation point is declared to occur at the minimum location. Since the decision rule is not based on absolute values and thresholds, but rather on relative values of extrema, it is more robust to data variation (like the type of dance) and no empirically derived decision thresholds are used. Another method which uses the motion speed of the 3D objects was presented by Xu et al. [23]. To reduce the computation time of motion information, the authors used the point distance (DP) instead of the vertex positions in Cartesian coordinates. DP is defined as the Euclidean distance between one fixed point and the coordinates of all the 3D object's vertices in each frame. Figure 3 shows the point distance for 2 frames of the Batter sequence. Before the determination of scene transitions, the histogram of the point distance of each frame is calculated. Point distance of frames #38 and #39 of the Batter sequence. Grey values represent the point distance from (0,0,0) [23] To detect abrupt and gradual transitions in 3D video, the Euclidean distance between the histograms of point distance and three thresholds are used, where the threshold values were derived empirically. Ionescu et al. [36] used a histogram-based algorithm specially tuned for animated films to detect ASB. Among GSB types, only fades and dissolves are detected, since they are the most common gradual transitions. The GSB detection is done using a pixel-level statistical approach proposed by [37]. The authors proposed the Short Colour Change (SCC) detection algorithm to reduce the cut detection false positives. The SCC is the effect that accompanies short-term frame colour changes, caused by explosions, lightning and flash-like visual effects. More recently, Slama et al. [38] proposed a method based on the motion speed to split a 3D video sequence into segments characterised by homogeneous human body movements (e.g. walk, run, and sprint). However, the authors only consider changes in the type of movement as significant video shot transition indicators. Here, video shots with small differences from previous shots and a small number of frames are avoided. The motion segmentation used in this work is based on the fact that when humans modify the motion type or direction, the motion magnitude decreases significantly. Thus, finding the local minima of motion speed can be used to detect the break points where human body movement changes and consequently to segment the entire video into shots. Evaluation metrics Three well-known performance indicators are used in the evaluation of the SBD methods for 2D video: recall rate (R), precision rate (P) [39] and the accuracy measure F1 [40]. The computation of these values is based on the comparison of manual segmentation (ground-truth) and computed segmentation. If a ground-truth is available, these metrics can also be applied to 3D video SBD methods. Recall rate is defined as the ratio between the number of shot boundaries detected by an algorithm and the total number of boundaries in the ground-truth dataset (see Eq. (1)). Precision rate, computed according to Eq.
(2), is defined as the ratio between the number of shot boundaries detected by an algorithm and the sum of this value with the number of false positives. F1 is a measure that combines P and R, see Eq. (3). $$ R={\frac{D}{D+D_{M}}} $$ $$ P={\frac{D}{D+D_{F}}} $$ $$ F1=\frac{2RP}{R+P} $$ where D is the number of shot boundaries correctly detected by the algorithm, \(D_{M}\) is the number of missed boundaries and \(D_{F}\) is the number of false detections. For good performance, the recall and precision rates should have values close to 1. The best performance is reached when F1 is equal to 1, while the worst occurs at 0. The recall rate, precision rate and measure F1 were used to evaluate the performance of temporal segmentation methods for 3D video in [11, 23, 38], while Yamasaki et al. [20] only used recall and precision rates in the evaluation process. Although these 3D SBD methods used the same evaluation metrics, the comparison of the results and performance obtained from such SBD methods is not possible because different datasets were used. Since the major difference between 2D and 3D video is the implicit or explicit availability of depth information, the visual features used in SBD methods for 3D video must take depth into account, i.e. the temporal segmentation must also consider depth information in order to use depth discontinuities in shot detection. Until now, most research works on SBD for 3D video did not use the depth information in the detection process. For example, Doulamis et al. in [10] proposed a key-frame extraction method for stereo video which includes an SBD method. However, this algorithm does not take into account the depth information of the stereo video and it is only applied to one view of the stereo sequence, for instance the left view. Another drawback of Doulamis' work is the lack of performance evaluation of the proposed temporal segmentation method. A method to segment stereo video was proposed in [13], but the proposed procedure does not take depth into account either. In [20, 23, 38], the authors proposed SBD methods for 3D video which are only applicable to 3D mesh models and require modifications to be used with the most common pixel-based 3D video formats, like stereo or video-plus-depth. Finally, Ferreira et al. [11] proposed a method which uses depth and temporal information for automatic detection of 3D video shots, employing a K-means clustering algorithm to locate the boundaries. This algorithm has the advantage of not using any explicit thresholds or training procedure. A common problem with the 2D video SBD methods described in the literature is the lack of common comparison grounds, as few works use the same dataset to test the methods proposed and evaluate their performance. This is a serious problem as it limits the comparisons that can be made between the different SBD methods. For the 3D case, the lack of comparative analyses is even more severe, due to the reduced number of SBD methods developed so far specifically for this type of visual information. The few works that have been proposed for SBD in 3D video usually use the recall and precision rates to evaluate performance, but the lack of benchmark 3D video sequences with ground-truth shot segmentations severely limits the number and types of performance evaluations that can be made. As mentioned above, the evaluation metrics presented in Section 1.4.2 are based on the comparison between manual and computed segmentation.
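For reference, a small Python sketch of Eqs. (1)-(3) is given below; the boundary-matching tolerance is an assumption added for illustration and is not part of the definitions in [39, 40].

```python
def sbd_scores(detected, ground_truth, tolerance=0):
    """Recall, precision and F1 for shot boundary detection (Eqs. (1)-(3)).
    'detected' and 'ground_truth' are lists of boundary frame indices; a detection
    counts as correct if it lies within 'tolerance' frames of an unmatched
    ground-truth boundary (the matching tolerance is an assumption, not from [39, 40])."""
    unmatched = set(ground_truth)
    D = 0
    for b in detected:
        match = next((g for g in sorted(unmatched) if abs(g - b) <= tolerance), None)
        if match is not None:
            unmatched.discard(match)
            D += 1
    D_M = len(unmatched)            # missed boundaries
    D_F = len(detected) - D         # false detections
    R = D / (D + D_M) if D + D_M else 0.0
    P = D / (D + D_F) if D + D_F else 0.0
    F1 = 2 * R * P / (R + P) if R + P else 0.0
    return R, P, F1

print(sbd_scores(detected=[120, 305, 600], ground_truth=[119, 300, 450, 601], tolerance=2))
```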
Therefore, besides the importance of having common test datasets, the development of universal and objective measures, which are specific to SBD and can be applied in different content domains and 3D video formats, is highly recommended and desired. 3D key-frame extraction In this section, we briefly introduce the main concepts behind key-frame extraction methods for 2D video and describe key-frame extraction methods for 3D video. The key-frame extraction methods under review are grouped into seven categories: non-optimised, clustering, minimum correlation, minimum reconstruction error (MRE), curve simplification, matrix factorisation and other methods. Non-optimised methods The simplest method for 3D key-frame summarisation is uniform sampling (UnS). This method selects key-frames at regular time intervals (see Fig. 4 a), e.g. selecting one video frame every minute to be a key-frame. This will result in a set of key-frames evenly distributed throughout the video. However, the selected key-frames might not contain meaningful or pertinent visual content, or there may be two or more similar key-frames. For instance, a selected key-frame might show a bad image (e.g. unfocused), or no key-frame may exist for some video shots, thus failing to effectively represent the video content. a UnS method: uniform sampling at equal intervals. b PoS method: selecting the first frame of each video shot Another simple and computationally efficient frame selection method is position sampling (PoS). In PoS, once the boundaries of a video shot are detected, the method selects frames according to their position in the video shot, e.g. the first, last or middle frame of the video shot (see Fig. 4 b) can be chosen as the key-frame. Thus, the size of the key-frame summary corresponds to the number of video shots of the entire video. In some summarisation applications, one key-frame per video shot is not enough, and the PoS method can be adapted to this need by allowing the selection of multiple frames at fixed positions within the video shot. For 3D video, UnS and PoS are used mostly as references for comparisons with other methods, as in [18, 41, 42]. Ionescu et al. [36] selected as key-frames the frames in the middle of each video shot to reduce the temporal redundancy and computation cost of their animation movie summarisation method. Yanwei et al. [16] used the middle frame of each video skim segment to represent this summary in a storyboard. In general, the above non-optimised methods may be used in both 2D and 3D video with minor adaptations. Clustering can be used to partition a large set of data into groups, minimising intra-group variability and maximising inter-group separation. After partitioning, the data in each cluster have similar features. The partitioning can be based on the similarities or distances between the data points, where each datum represents a vector of features of a frame. These points are grouped into clusters based on feature similarity and one or more points from each cluster are selected to represent the cluster, usually the points closest to the cluster centre. The representative points of the clusters can be used as key-frames for the entire video sequence. A significant number of clustering methods reported in the literature use colour histograms as the descriptive features, and the clustering is performed using distance functions such as Euclidean distances or histogram intersection measures.
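As an illustration of this clustering approach (a generic sketch, not a specific method from the reviewed literature; scikit-learn's K-means is assumed, and the combination of a luminance histogram with the mean depth is an arbitrary choice of features), the function below clusters per-frame feature vectors and keeps the frame closest to each cluster centre as a key-frame.

```python
import numpy as np
from sklearn.cluster import KMeans  # assumed available

def cluster_key_frames(luma_frames, depth_frames, n_key_frames):
    """Illustrative clustering-based key-frame selection: each frame is described by a
    32-bin luminance histogram plus its mean depth, frames are clustered with K-means,
    and the frame closest to each cluster centre is returned, in temporal order."""
    feats = np.array([
        np.concatenate([np.histogram(l, bins=32, range=(0, 255), density=True)[0],
                        [float(np.mean(d))]])
        for l, d in zip(luma_frames, depth_frames)
    ])
    km = KMeans(n_clusters=n_key_frames, n_init=10).fit(feats)
    key_frames = []
    for c in range(n_key_frames):
        members = np.where(km.labels_ == c)[0]
        dists = np.linalg.norm(feats[members] - km.cluster_centers_[c], axis=1)
        key_frames.append(int(members[np.argmin(dists)]))
    return sorted(key_frames)
```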
Clustering
Clustering can be used to partition a large set of data into groups, minimising intra-group variability and maximising inter-group separation. After partitioning, the data in each cluster have similar features. The partitioning can be based on the similarities or distances between the data points, where each datum is a feature vector representing a frame. These points are grouped into clusters based on feature similarity, and one or more points from each cluster, usually those closest to the cluster centre, are selected to represent it. The representative points of the clusters can then be used as key-frames for the entire video sequence. A significant number of clustering methods reported in the literature use colour histograms as the descriptive features, with the clustering performed using distance functions such as the Euclidean distance or histogram intersection measures. These methods are very popular due to their robustness and the simplicity of computing colour histograms [43, 44]. Other features can also be used in clustering-based methods; for example, Ferreira et al. in [11] used temporal and depth features with a clustering algorithm to segment 3D video sequences into 3D video shots. K-means is one of the simplest algorithms used to solve the clustering problem. It can be applied to extract key-frames from short video sequences or shots, but its application to longer video sequences must be done with care, taking into account the large processing time and memory requirements of the algorithm. To reduce the number of frames processed by the clustering algorithm, some authors pre-sample the original video, as proposed in [24]. The quality of the summaries may not be affected by this operation, but the sampling rate must be chosen carefully. Although K-means is a popular and well-known clustering algorithm, it has some limitations, such as the need to set the desired number of clusters a priori and the fact that the sequential order of the key-frames may not be preserved. Huang et al. [18] used the K-means clustering algorithm to extract a set of 3D key-frames to be compared with the output of their own key-frame extraction method.
Curve simplification
In the curve simplification method, each frame of the video sequence is treated as a point in a multidimensional feature space. The points are then connected in sequential order by an interpolating trajectory curve, and the method searches for the set of points which best represents the curve shape. The binary curve splitting algorithm [45] and discrete contour evolution [46, 47] are two curve simplification algorithms used in key-frame extraction methods. Curve simplification-based algorithms preserve the sequential information of the video sequence during key-frame extraction; however, the search for the best curve representation has high computational complexity. The curve simplification method proposed in [48] was used by Huang et al. [18] in the evaluation of their 3D key-frame extraction method.
Minimum correlation
Minimum correlation based methods extract a set of key-frames such that the inter-key-frame correlation is minimal, i.e. they extract the key-frames that are most dissimilar from each other. Formally, the optimal key-frame extraction based on minimum correlation can be defined as

$$ K=\arg\min_{l_{0},l_{1},\ldots,l_{m-1}} \text{Corr}\left(f_{l_{0}},f_{l_{1}}, \ldots,f_{l_{m-1}}\right) $$

where Corr(.) is a correlation measure, n is the number of frames of the original sequence F, \(l_{i}\) is a frame index in F and K is the resulting set of m key-frames, \(K=\{f_{l_{0}},f_{l_{1}}, \ldots,f_{l_{m-1}}\}\). Different algorithms can be used to find the optimal solution, such as logarithmic and stochastic search or a genetic algorithm [4]. Key-frame extraction for stereoscopic video based on minimum correlation was first presented by Doulamis et al. in [10], where colour and depth information are combined to summarise stereo video sequences. After segmentation of the entire video sequence, a shot feature vector is constructed based on the size, location, colour and depth of each shot. To limit the number of shot candidates, a shot selection method based on the similarity between shots is applied. Finally, the stereo key-frames are extracted from each of the most representative shots. The extraction is achieved by minimising a cross-correlation criterion using a genetic algorithm [49]. Since this approach selects frames by minimising cross-correlation, the selected key-frames are dissimilar to each other in terms of colour and depth.
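The optimisation above can be tackled with stochastic search or genetic algorithms; the sketch below uses a simple greedy approximation with grey-level histogram correlation as the similarity measure. Both choices are illustrative assumptions of the example and not the procedure of [10].

```python
import numpy as np

def frame_histogram(frame, bins=16):
    """Normalised grey-level histogram used as a simple frame descriptor."""
    hist, _ = np.histogram(frame, bins=bins, range=(0, 255))
    return hist / max(hist.sum(), 1)

def greedy_min_correlation(frames, m):
    """Greedily pick m frame indices whose descriptors are least correlated."""
    feats = [frame_histogram(f) for f in frames]
    selected = [0]  # seed with the first frame
    while len(selected) < m:
        best_idx, best_score = None, np.inf
        for i in range(len(frames)):
            if i in selected:
                continue
            # worst-case (highest) correlation with the already-selected key-frames
            score = max(abs(np.corrcoef(feats[i], feats[j])[0, 1]) for j in selected)
            if score < best_score:
                best_idx, best_score = i, score
        selected.append(best_idx)
    return sorted(selected)
```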
Minimum reconstruction error
In MRE-based methods, the extraction of the key-frames is based on minimising the difference between the original video sequence/shot and the sequence reconstructed from the key-frames. A frame interpolation function \(\mathcal{I}(t,K)\) is used to compute the frame at time t of the reconstructed sequence from a set of key-frames K. The frame-copy method can be used to reconstruct the video sequence/shot (i.e. performing zero-order interpolation), but more sophisticated methods like motion-compensated interpolation may also be used, as proposed in [50]. The reconstruction error \(\mathcal{E}(\mathbf{F},K)\) is defined as

$$ \mathcal{E}\left(\mathbf{F},K\right)=\frac{1}{n}\sum_{i=0}^{n-1}d\left(f_{i},\mathcal{I}(i,K)\right) $$

where d(.) is the difference between two frames and F is a video sequence/shot with n frames, \(\mathbf{F}=\{f_{0},f_{1},\ldots,f_{n-1}\}\), \(f_{i}\) being the ith frame. The key-frame ratio R(K) is the ratio between the number of frames m in the set K and the total number of frames n in the video sequence/shot F, i.e. R(K)=m/n. Given a key-frame ratio constraint \(R_{m}\), the optimum set of key-frames \(K^{*}\) is the one that minimises the reconstruction error, i.e.

$$ K^{*}=\arg\min_{K\subset \mathbf{F}} \mathcal{E}(\mathbf{F},K) \quad \text{s.t.}\; R(K)\leq R_{m} $$

Thus, the MRE is defined by

$$ \text{MRE}=\mathcal{E}(\mathbf{F},K^{*}) $$

For example, given a shot F with n=10 frames and a key-frame ratio R(K)=0.2, this algorithm extracts at most 2 frames as key-frames, i.e. m=2.
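A minimal sketch of the MRE computation is shown below, assuming zero-order ('frame-copy') interpolation and mean absolute difference as the frame distance d(.); both choices are assumptions of the example. The optimal set K* under a ratio constraint would then be found by searching over candidate key-frame sets, e.g. exhaustively or by dynamic programming.

```python
import numpy as np

def reconstruct_zero_order(n, keyframe_idx):
    """Map every frame position to the nearest preceding key-frame index (frame-copy)."""
    keyframe_idx = sorted(keyframe_idx)
    recon, k = [], 0
    for i in range(n):
        while k + 1 < len(keyframe_idx) and keyframe_idx[k + 1] <= i:
            k += 1
        recon.append(keyframe_idx[k])
    return recon

def reconstruction_error(frames, keyframe_idx):
    """E(F, K): mean per-frame distance between F and its frame-copy reconstruction."""
    n = len(frames)
    recon = reconstruct_zero_order(n, keyframe_idx)
    d = [np.mean(np.abs(frames[i].astype(float) - frames[recon[i]].astype(float)))
         for i in range(n)]
    return float(np.mean(d))
```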
Xu et al. in [19] presented a key-frame extraction method to summarise sequences of 3D mesh models, wherein the number and location of key-frames are found through a rate-distortion optimisation process. As in all shot-based methods, in Xu's method shot detection is performed before key-frame extraction. Here, the SBD is based on the motion activity of a human body in dancing and sports videos. The motion activity is measured by the Euclidean distance between feature vectors of neighbouring 3D frames. The feature vectors are derived from three histograms (one for each spherical coordinate r, θ and ϕ) of all vertices of the 3D frames; before computing the spherical histograms, the Cartesian coordinates of the vertices are transformed to spherical coordinates. Each histogram is computed by splitting the range of the data into equal-size bins and counting the number of points from the data set that fall into each bin. After shot detection, the key-frames are extracted in each shot. The key-frame extraction is based on a rate-distortion trade-off expressed by a Lagrangian cost function, \(\text{cost}(\text{Shot}_{k}) = \text{Distortion}(\text{Shot}_{k}) + \lambda\,\text{Rate}(\text{Shot}_{k})\), where Rate is the number of key-frames in a shot and Distortion is the Euclidean distance between feature vectors. Huang et al. [51] also presented a key-frame extraction method for 3D video based on rate-distortion optimisation, where the Rate and Distortion definitions are similar to those used in [19]. However, this method is not based on shot identification, since it produces 3D key-frame summaries without requiring prior video shot detection. The key-frame summary sought should minimise a Conciseness cost function, which is a weighted sum of the Rate and Distortion functions defined in that work. A graph-based method is used to extract the key-frames, such that key-frame selection is based on the shortest path in a graph constructed from a self-similarity map; the spherical histograms of the 3D frames are used to compute the self-similarity map. More recently, Ferreira et al. [42] proposed a shot-based key-frame extraction method based on rate-distortion optimisation for 2D and 3D video. For each video shot, a corresponding set of key-frames is chosen via dynamic programming by minimising the distortion between the original video shot and the one reconstructed from the set of key-frames. The distortion metric comprises not only information about frame differences, but also the visual relevance of different image regions as estimated by an aggregated saliency map, which combines three saliency feature maps computed from spatial, temporal and depth information.
Matrix factorisation
Another class of methods uses matrix factorisation (MF) techniques to extract key-frames from a video sequence. MF techniques are based on approximating a high-dimension matrix A (the original data) by a product of two or more lower-dimension matrices. The matrix A can be composed of different features of the video or images; e.g. Gong and Liu [52] used colour histograms to represent video frames, while Cooper et al. [53] factorised the similarity matrix into essential structural components (lower-dimension matrices). In addition to dimensionality reduction, MF techniques significantly reduce the processing time and memory used during the operation. The MF techniques found in these key-frame extraction methods include singular value decomposition (SVD) and non-negative matrix factorisation. Gong and Liu [52] proposed a key-frame extraction method based on SVD. To reduce the number of frames to be processed, only a subset is taken from the input video at a pre-defined sampling rate before the SVD. Then, colour histograms (RGB) are used to create a frame-feature matrix A of the pre-selected frames. Next, the SVD is performed on matrix A to obtain an orthonormal matrix V in which each column vector represents one frame in the defined feature space. A set of key-frames is then identified by clustering the projected coefficients. According to the user's request, the output can be a set of key-frames (one from each cluster) or a video skim with a user-specified time duration. To construct the set of key-frames, the frames closest to the centres of the clusters are selected as key-frames. Non-negative similarity matrix factorisation based on low-order discrete cosine transforms [53] and sliding-window SVD [54] are other approaches to key-frame extraction based on matrix factorisation. In [18], Huang et al. proposed a method for 3D video that represents an animation sequence with a set of key-frames. Given an animation sequence with n frames and m vertices of a surface in each frame, an n×m matrix A is built with the vertex coordinates. This matrix A is then approximately factorised into an n×k weight matrix W and a k×m key-frame matrix H, where k is the predefined number of key-frames. As k is selected to be smaller than n and m, this decomposition results in a compact version of the original data, A≈WH. An iterative least-squares minimisation procedure is used to compute the weights and extract the key-frames. This procedure is driven by user-defined parameters such as the number of key-frames and an error threshold.
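To make the SVD-plus-clustering idea concrete, the sketch below projects per-frame feature vectors onto a low-rank basis and clusters them; the feature choice, rank and use of K-means are assumptions of this example rather than the exact procedures of [52] or [18].

```python
import numpy as np
from sklearn.cluster import KMeans

def svd_keyframes(features, num_keyframes, rank=10):
    """Select key-frame indices by clustering frames in a low-rank SVD subspace.

    features: (num_frames, feature_dim) array, e.g. per-frame colour histograms.
    """
    # Low-rank projection of the frame-feature matrix.
    U, S, _ = np.linalg.svd(features, full_matrices=False)
    k = min(rank, len(S))
    projected = U[:, :k] * S[:k]

    km = KMeans(n_clusters=num_keyframes, n_init=10, random_state=0).fit(projected)
    keyframes = []
    for c in range(num_keyframes):
        members = np.where(km.labels_ == c)[0]
        centre = km.cluster_centers_[c]
        # The frame closest to the cluster centre becomes the key-frame.
        dists = np.linalg.norm(projected[members] - centre, axis=1)
        keyframes.append(int(members[np.argmin(dists)]))
    return sorted(keyframes)
```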
Lee et al. [22] introduced a deformation-driven genetic algorithm to search for good representative animation key-frames. Once the key-frames are extracted, similarly to [18], the animation is reconstructed by a linear combination of the extracted key-frames for better approximation. To evaluate the performance of the proposed method, the authors compare it with Huang's method proposed in [18].
Other methods
The methods described in this section could not be easily classified into the preceding categories, mostly on account of the diversity of approaches followed in solving the key-frame extraction problem. As such, and given their importance, they are described together here. Assa et al. proposed a method to create an action synopsis image composed of key poses (human body motion) based on the analysis of motion curves. The method integrates several key-frames into a single still image or a small number of images to illustrate the action, and is applied to 3D animation sequences and 2D video, as documented in [17]. Lee et al. [25] proposed a method to select key-frames from 3D animation video using the depth information of the animation. The extracted key-frames are used to compose a single-image summary. The entire video sequence is divided into temporal segments based on the motion of the slowest moving objects, and then a summarisation method is applied to the segments. The depth information and the respective gradient (computed from the depth values of each frame) are used to compute the importance of each frame. A single-image summary composed of several foreground visual objects is built based on the importance of each frame. The authors proposed a threshold-based approach to control the visual complexity (number of foreground objects) of the single-image summary (one per video sequence), as shown in Fig. 5. This approach reduces the number of video frames to be analysed, but in some cases the method can miss important information contained in the temporal segments.
(Fig. 5: Single-image key-frame presentation method [25].)
Jin et al. [41] proposed a key-frame extraction method for animation sequences (skeletal and mesh animations). The method uses animation saliency computed from the original data to aid the selection of key-frames that can reconstruct the original animation with smaller error. Since an animation sequence is usually characterised by a large amount of information, for computational efficiency the sequence is projected onto a lower-dimensional space, where all frames are represented as points on curves defined in that space. The curves are then sampled, and the sampled points are used to compute Gaussian curvature values; the points with the largest curvature values are selected as candidate key-frames. Finally, a key-frame refinement method is employed to minimise an error function which incorporates visual saliency information. The aim of visual saliency is to identify the regions of an image which attract most human visual attention. Lee et al. [55] extended this idea to 3D and computed mesh saliency for use in a mesh simplification algorithm that preserves much of the information of the original input. More recently, visual saliency has also been used in 3D key-frame extraction, in the method proposed by Ferreira et al. in [42].
Yanwei et al. [16] proposed a summarisation method for multiview video with non-synchronised views, including four views covering 360°, which results in small inter-view correlation and thus makes similarity measures more difficult to compute. In this method, each view is segmented into video shots, and the general solution combines features of different shots and uses a graph model of the correlations between shots. Due to the correlation among multiview shots, the graph has complicated connectivity, which makes summarisation very challenging. For that purpose, random walks are used for shot clustering, and the final summary is then generated by a multi-objective optimisation process based on different user requirements, such as the number of shots, summary length and information coverage. The output of Yanwei's method is a multiview storyboard condensing spatial and temporal information. The problem of key-frame extraction for 3D video was first addressed by Doulamis et al. in [10], who proposed a method combining colour and depth information to summarise stereo video sequences. Papachristou et al. in [13] developed a video shot classification framework for stereoscopic video, in which the key-frame extraction method used is based on mutual information. Even though the framework was proposed for stereoscopic video, the key-frame extraction method only uses one view of the stereoscopic video. Until now, only some specific 3D video formats have been considered by the existing key-frame extraction methods: stereoscopic video was used in [10, 42], video-plus-depth (V+D) was used by Ferreira et al. in [42] and 3D computer graphics formats in [17–19, 22, 25, 51]. Thus, there is room for further research on efficient key-frame extraction methods that can be applied to other 3D video formats, such as MVV, MVD and holoscopic video. Most 3D key-frame extraction methods cited in this paper were developed for specific content and 3D formats, and only four of them include comparisons with similar methods [18, 22, 41, 51]. In [18], curve simplification, UnS and clustering methods were used as reference methods for the performance evaluation and comparison of the proposed matrix factorisation method. The authors showed that the method based on matrix factorisation extracts more representative key-frames than the other three competing methods [22, 41, 51]; however, the algorithm is very slow, with quadratic running time complexity. In [22], the proposed method based on a genetic algorithm is compared with Huang's method [18] in terms of PSNR and computational complexity. The former is very efficient in terms of computation time compared to the latter, but quality-wise (average PSNR) it is slightly worse; Huang's method [18] is also slightly better when comparing maximum and minimum PSNR. Huang et al. in [51] compare their key-frame extraction method with the method used in [19], and the results show improved performance for all 3D video sequences used. Jin et al. in [41] compare their proposed method with the UnS and principal component analysis methods [56]; the results show that the proposed method achieves much better reconstruction of skeletal and mesh animation than the other methods under analysis. As mentioned before, most key-frame extraction methods for 3D video rely on a previous SBD step. However, the methods just described, from [18, 22, 41, 51], do not perform any pre-analysis of the video signal to identify shots and their boundaries.
The quality of key-frame summaries obtained using these approaches can be negatively affected when accurate shot segmentation is not available. Another important issue is the definition of the number of key-frames needed to represent the original sequence. This number depends on user requirements and on the content of the video to be summarised, and its choice frequently involves a trade-off between the quality and the efficiency of the key-frame summary.
Key-frame presentation
Once the key-frames are extracted, they need to be presented in an organised manner to facilitate video browsing and navigation by the user. Key-frame presentation methods aim to show the key-frames in some meaningful way, allowing the user to grasp the content of a video without watching it from beginning to end [4]. The most common methods for key-frame presentation are the static storyboard, the dynamic slideshow and the single image, see Fig. 1. The static storyboard presents a set of miniaturised key-frames spatially tiled in chronological order, allowing quick browsing and viewing of the original video sequence. This presentation method was used with 3D video in [10, 18, 19, 22, 41, 51]. The second method is the dynamic slideshow, which presents the key-frames one by one on the screen and allows browsing over the whole video sequence. The third method is the single image, which morphs parts of different key-frames in chronological order to produce a single image. Normally, in this presentation type the background and the (time-shifted) foreground objects are aggregated into a single image, as exemplified in Fig. 6. In this figure, the foreground shows children playing on the bars of a playground; three positions of the children on the bars are visible, corresponding to three key-frames of the video sequence.
(Fig. 6: Video synopsis [43].)
Qing et al. [12] proposed a generic method for extracting key-frames in which the Jensen-Shannon divergence is used to measure the difference between video frames, to segment the video into shots and to choose key-frames in each shot. The authors also proposed a 3D visualisation tool used to display the key-frames and useful information related to the key-frame selection process. More recently, Nguyen et al. [57] proposed the Video Summagator, which provides a 3D visualisation of a video cube for static and dynamic video summaries. Assa et al. proposed a method to create an action synopsis image from a 3D animation sequence or 2D video [17]. Lee et al. also proposed a method to summarise a 3D animation into a single image based on depth information [25]. In [58], a 3D interface (3D-Ring and 3D-Globe) was proposed as an alternative to the 2D grid presentation for interactive item search in visual content databases, see Fig. 7. Even though this system was designed for use with large databases, it can also be applied to visualise key-frame summaries of 2D and 3D video.
(Fig. 7: a 3D-Ring interface, b 3D-Globe interface and c 2D grid presentation; figures based on [58].)
Most of the 3D key-frame extraction methods proposed in the literature so far focus on the extraction rather than on the presentation of key-frame sets to viewers. So far, only Assa et al. [17] and Lee et al. [25] have proposed presentation solutions distinct from the static storyboard used in association with most 3D key-frame extraction methods [10, 18, 19, 22, 41, 51].
In this scenario, with only two presentation solutions, it is foreseeable that the development of new 3D video and image display devices will lead to the creation of new methods to display 3D video summaries or key-frame collages, providing the user with more immersive and more meaningful ways to observe these types of time-condensed video representations.
Quality evaluation of 3D key-frame summaries
One of the most important topics in video summarisation research is the evaluation of key-frame extraction methods. In this section, we present current key-frame summary evaluation methods and some related issues. These methods are grouped into three categories: result description, subjective and objective methods, as proposed in [4].
Result description
This is the most common and simplest form of evaluating key-frame extraction methods, since it does not require a reference, either for objective or subjective comparison with other methods. Usually, it is used to explain and describe the advantages of a method compared with others, based on the presentation and/or description of the extracted key-frames (visual comparison), as in [18, 19, 22, 25, 41, 51]. This type of evaluation can also be used to discuss the influence of specific parameters or features of the method, and also the influence of the content on the key-frame set, as in [10, 19]. In some works, this type of evaluation is complemented with objective and/or subjective methods, as in [19, 25]. However, the result description method has some limitations, such as the reduced number of methods which can be compared at the same time, i.e. it is inadequate for comparing key-frame summaries of a large number of video sequences or methods. Another drawback is the subjectivity inherent to this type of evaluation, since the underlying comparison results are usually user-dependent and thus prone to inter- and intra-observer fluctuations.
Subjective methods
Subjective methods rely on the independent opinion of a panel of users judging the quality of the generated key-frame video summaries according to a known methodology. In this type of evaluation, a panel of viewers is asked to observe both the summaries and the original sequence and then respond to questions related to some evaluation criteria (e.g. 'Was the key-frame summary useful?', 'Was the key-frame summary coherent?'), or to rate each key-frame as 'good', 'fair' or 'poor' with respect to the original video sequence. The experiments can include a set of absolute evaluations and/or a set of relative evaluations, in which two key-frame summaries are presented and compared. Usually, the summary visualisation and rating steps are repeated for each video in the evaluation set by each viewer. During the evaluation of the key-frame summaries, it is also necessary to take into account external factors which can influence the ratings, such as attention and fatigue, especially in long evaluation sessions with many video summaries. In addition, the experiments must follow standard protocols prepared specifically for the subjective assessment of video quality [59]. Subjective evaluation methods were used in [16, 60–63]. In [60], subjective assessment was used to grade single key-frame representations as 'good', 'bad' or 'neutral' for each video shot, and also to rate the number of key-frames as 'good', 'too many' or 'too few' in the case of multiple key-frames per shot.
In [61, 63], the quality of the key-frame summary is evaluated by asking users to give a mark between 0 and 100 for three criteria, 'informativeness', 'enjoyability' and 'rank', after watching the original sequences and the respective key-frame summaries. Ejaz et al. [62] used subjective evaluations to compare the proposed method with four prominent key-frame extraction methods: open video project (OV) [45], Delaunay triangulation (DT) [64], STIll and MOving Video Storyboard (STIMO) [65] and Video SUMMarisation (VSUMM) [24]. In this case, the evaluation is based on mean opinion scores (MOS), and viewers are asked to rate the quality of the key-frame summary on a scale of 0 (minimum) to 5 (maximum) after watching the original sequences and the respective summaries generated by all the methods. In [16], subjective assessments were also used to evaluate multiview video summaries, with the aim of grading the 'enjoyability', 'informativeness' and 'usefulness' of the video summary. Here, three questions were put to the viewers to evaluate the method: Q1: 'How about the enjoyability of the video summary?', Q2: 'Do you think the information encoded in the summary is reliable compared to the original multiview videos?' and Q3: 'Will you prefer the summary to the original multiview videos if stored in your computer?'. In reply to questions Q1 and Q2, the viewers assigned a score between 0 (minimum) and 5 (maximum), and for Q3 the viewers only needed to answer 'yes' or 'no'. Of all the 3D key-frame extraction methods reviewed, only [16, 17, 25] used subjective evaluations.
Objective methods
Although subjective evaluation provides a better representation of human perception than objective evaluation, it is not suitable for practical implementations due to the time required to conduct the opinion collection campaigns. Objective evaluation methods are reproducible and can be specified analytically, and since they are automatable they can be used to rate a proposed method on a large number of videos of variable genres and formats. These methods can be applied to all types of video formats without requiring the services of video experts, and can be performed rapidly and automatically if suitable quality measures are available. Besides being faster, simpler and easily replicable, this type of method is more economical than subjective evaluation. The works reviewed in this article which use objective quality evaluation employ several quality measures originally developed for 2D video; these can also be applied to 3D video after being modified to take into account the specific features of 3D visual information. The shot reconstruction degree (SRD) distortion measure [66] and the fidelity measure (Fm) defined in [67] follow two different approaches: the fidelity measure employs a global strategy, while SRD uses a local evaluation of the key-frames. To judge the conciseness of a key-frame summary, a compression ratio (CR) measure is used [68]. If a ground-truth summary is available, the Comparison of User Summaries (CUS) [24], recall rate, precision rate and accuracy measure (F1) can be used; these measures compare the computed summaries with those manually built by users. More details on these measures are presented in the next sub-sections. Shot reconstruction degree: SRD measures the capability of a set of key-frames to represent the original video sequence/shot.
Assuming a video shot \(\mathbf{F}=\{f_{0},f_{1},\ldots,f_{n-1}\}\) of n frames and \(K=\{f_{l_{0}},f_{l_{1}}, \ldots,f_{l_{m-1}}\}\) a set of m key-frames selected from F, the reconstructed shot \(\mathbf{F}'=\{f'_{0},f'_{1},\ldots,f'_{n-1}\}\) is obtained from the set K by using some type of frame interpolation. The SRD measure is defined as

$$ \text{SRD}(\mathbf{F},\mathbf{F}')=\frac{1}{n}\sum_{k=0}^{n-1}\mathcal{S}im\left(f_{k},f^{'}_{k}\right) $$

where n is the size of the original video sequence/shot F and \(\mathcal{S}im(.)\) is the similarity between two video frames. In Liu et al. [66], the similarity measure chosen was the peak signal-to-noise ratio (PSNR), but other similarity metrics that include 3D features can also be used in the evaluation of 3D key-frame summaries. A key-frame summary K is a good representation of the original F when the magnitude of its SRD is high. Fidelity: Fm is computed from the semi-Hausdorff distance \(d_{sh}\), i.e. the maximum of the minimal distances between the set of key-frames K and each frame of the original F. Let F be a video sequence/shot containing n frames, and \(K=\{f_{l_{0}},f_{l_{1}}, \ldots,f_{l_{m-1}}\}\) a set of m frames selected from F. The distance between the set K and a generic frame \(f_{k}\), 0≤k≤n−1, belonging to F can be calculated as follows:

$$ d_{\text{min}}(f_{k},K)=\min_{j}\left\{d\left(f_{k},f_{l_{j}}\right)\right\}; \quad j=0,1,\ldots,m-1 $$

Then the semi-Hausdorff distance \(d_{sh}\) between K and F is defined as

$$ d_{sh}(\mathbf{F},K)=\max_{k}\{d_{\text{min}}(f_{k},K)\}; \quad k=0,1,\ldots,n-1 $$

The fidelity measure is defined as

$$ Fm(\mathbf{F},K)=\text{MaxDiff} - d_{sh}(\mathbf{F},K) $$

where MaxDiff is the largest possible value that the frame difference measure can assume. The function \(d(f_{a},f_{b})\) measures the difference between two video frames a and b. The majority of the existing dissimilarity measures can be used for d(.,.), such as the \(L_{1}\)-norm (city block distance), the \(L_{2}\)-norm (Euclidean distance) and the \(L_{n}\)-norm [67]. As mentioned before, the Fm measure can be used for 3D video with the necessary changes in the d(.,.) distance. A high Fm value means that the selected key-frames provide an accurate representation of the whole F. Compression ratio: A video summary should not contain too many key-frames, since the aim of the summarisation process is to allow viewers to quickly grasp the content of a video sequence. For this reason it is important to quantify the conciseness of the key-frame summary. Conciseness is the length of the key-frame summary in relation to the original video segment length and can be measured by a compression ratio, defined as the relative amount of 'savings' provided by the summary representation:

$$ CR(\mathbf{F})=1-\frac{m}{n} $$

where m and n are the number of frames in the key-frame set K and in the original video sequence F, respectively. Generally, a high compression ratio is desirable for a compact video summary [68].
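A small sketch of the fidelity and compression ratio computations is given below, assuming mean absolute pixel difference as d(.,.) and 8-bit frames (so MaxDiff = 255); both choices are assumptions of the example.

```python
import numpy as np

def frame_distance(fa, fb):
    """d(fa, fb): mean absolute difference between two 8-bit frames."""
    return float(np.mean(np.abs(fa.astype(float) - fb.astype(float))))

def fidelity(frames, keyframe_idx, max_diff=255.0):
    """Fm = MaxDiff - semi-Hausdorff distance between the shot and its key-frames."""
    d_sh = max(min(frame_distance(f, frames[k]) for k in keyframe_idx) for f in frames)
    return max_diff - d_sh

def compression_ratio(num_frames, num_keyframes):
    """CR = 1 - m/n."""
    return 1.0 - num_keyframes / num_frames
```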
Comparison of user summaries (CUS): CUS is a quantitative measure based on the comparison of summaries built manually by users with computed summaries. It was proposed by Avila et al. in [24]. The user summaries are taken as reference, i.e. the ground truth, and the comparison between summaries is based on specific metrics. The colour histogram is used for comparing key-frames from different video summaries, while the distance between them is measured using the Manhattan distance. Two key-frames are considered similar if the Manhattan distance of their colour histograms is below a predetermined threshold δ; in [24], this threshold was set to 0.5. Two evaluation metrics, the accuracy rate \(\text{CUS}_{A}\) and the error rate \(\text{CUS}_{E}\), are used to measure the quality of the computed summaries. They are defined as follows:

$$ \text{CUS}_{A}=\frac{n_{\text{match}}}{n_{\text{US}}} \qquad \text{CUS}_{E}=\frac{n_{\text{no-match}}}{n_{\text{US}}} $$

where \(n_{\text{match}}\) and \(n_{\text{no-match}}\) are, respectively, the number of matching and non-matching key-frames between the computed and the user-generated summary, and \(n_{\text{US}}\) is the total number of key-frames in the user summary. \(\text{CUS}_{A}\) varies between 0 and 1, where \(\text{CUS}_{A}=0\) is the worst value, indicating that none of the key-frames from the computed summary matches those of the user summary. A value of \(\text{CUS}_{A}=1\) is the best case and indicates that all key-frames from both summaries match each other. A null value for \(\text{CUS}_{E}\) indicates a perfect match between both summaries. Computational complexity: Another relevant performance metric taken into account in the evaluation of key-frame extraction methods is the computational complexity, which is usually equated with the time spent to construct a key-frame summary. This metric was used in [24, 62, 63, 68, 69] for 2D video summaries. Among 3D key-frame extraction methods, the computational complexity metric is only used by Lee et al. in [22], where the computational complexity of Lee's and Huang's [18] methods is compared. Other methods: Other methods and measures have been used for the objective evaluation of 3D key-frame summaries. In [19, 51] a rate-distortion curve is used, modelling a monotonic relationship between rate and distortion, with increases of the former leading to decreases of the latter. In [18], the root mean square error (RMSE) distance between the original and reconstructed animation was used as the objective quality measure (with an inverse relationship in this case); this measure is the same as in [70] and [71]. Lee et al. [22] used PSNR to measure the reconstruction distortion. Jin et al. in [41] measured the reconstruction error of the animation obtained from the extracted key-frames, using the average reconstruction error magnitude over the degrees of freedom (DOF). Conciseness, coverage, context and coherence are desirable attributes in any key-frame summary. Some of these attributes are mostly subjective, as is the case of context and coherence. Conciseness is related to the length of the key-frame summary, while coverage evaluation is based on the comparison between the computed key-frame summary and a ground-truth summary, expressed by the recall rate, precision rate, \(\text{CUS}_{A}\) and \(\text{CUS}_{E}\). Most evaluation metrics reviewed above were developed for 2D video. However, some of them, such as Fm and SRD, have also been extended to evaluate 3D video summaries after some adaptation; this is the case of the 3D key-frame extraction method presented by Ferreira et al. in [42], where the Fm and SRD metrics were used. To measure the recall rate, precision rate, \(\text{CUS}_{A}\), \(\text{CUS}_{E}\), computational complexity and compression ratio in 3D video summarisation, no adaptation is needed. Key-frame extraction methods are often application-dependent (e.g. summarisation of sports videos, news, home movies, entertainment videos and, more recently, 3D animation) and the evaluation metrics must be adapted to the intended use. A good summary quality evaluation framework should be based on a hybrid evaluation scheme which combines the strengths of subjective and objective methods with the advantages of result description evaluations.
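To make the CUS metrics defined above concrete, the sketch below matches computed key-frames against a user summary using normalised colour histograms; the greedy one-to-one matching is an assumption of the example and not the exact procedure of [24].

```python
import numpy as np

def cus_scores(computed_hists, user_hists, delta=0.5):
    """Accuracy (CUS_A) and error (CUS_E) rates from normalised colour histograms."""
    unmatched_user = list(range(len(user_hists)))
    n_match = 0
    for ch in computed_hists:
        # Manhattan (L1) distance to every still-unmatched user key-frame
        dists = [np.abs(ch - user_hists[u]).sum() for u in unmatched_user]
        if dists and min(dists) < delta:
            unmatched_user.pop(int(np.argmin(dists)))
            n_match += 1
    n_no_match = len(computed_hists) - n_match
    n_us = len(user_hists)
    return n_match / n_us, n_no_match / n_us
```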
Applications
In this section, some applications of 3D key-frame extraction methods and some related aspects are presented. These applications are grouped into five categories: video browsing, video retrieval, content description, animation synthesis and others.
Video browsing
The video browsing problem and associated issues have been investigated by the research community for decades [72]. However, the growing use of 3D video and the specific characteristics of this type of visual information make 3D video browsing a more interesting and challenging problem. Access to databases or other collections of videos can be eased by using key-frame extraction methods to abstract/summarise long video sequences in the repository of interest. With this kind of abridged video representation, a viewer can quickly find the desired video in a large database. For example, once an interesting topic has been identified through display of the key-frames, an operation as simple as a click on the respective key-frame can initiate playback of the original content at that particular instant. Many video browsing methods have been proposed for 2D video [72]; however, to the best of the authors' knowledge, no such works have been reported in the literature for 3D video.
Video retrieval
In contrast to video browsing, where viewers often just browse interactively through video summaries in order to explore their content, in video retrieval the viewers search for certain visual items (e.g. objects, people and scenes) in a video database. In this type of retrieval process, viewers are typically expected to know exactly what they are looking for. Therefore, it is crucial to implement appropriate search mechanisms for the different types of queries provided by distinct viewers with particular interests. The matching between the viewers' interests (queries) and the database content can be made with recourse to textual or image-based descriptions, or combinations of both. Some 2D video search and retrieval applications have combined video browsing and retrieval in the same platform [72]. In the case of 3D video this problem is still open for research. Finally, it is worth pointing out the work done on 3D object recognition techniques, which can also be used in retrieval, as published in [73–75].
Content description
Vretos et al. [76] presented a way of using the audio-visual description profile (AVDP) of the MPEG-7 standard for 3D video content description. The description of key-frames is contemplated in the AVDP profile through the MediaSourceDecompositionDS (which is used in the AVDP context to decompose an audiovisual segment into its constituent audio and video channels). This content description scheme thus allows 3D key-frames to be used for fast browsing and for the condensed representation of query results in 3D video search tasks. Another application of key-frames to content description was proposed by Sano et al. [77], where the authors proposed and discussed how the AVDP profile of MPEG-7 can be applied to multiview 3D video content [56].
Animation synthesis
Blanz et al. [78] proposed a morphable 3D face model built by transforming the shape and texture of examples into a new 3D model representation. According to this modelling approach, new or similar faces and expressions can be created by forming linear combinations of the 3D face models. A concept similar to the one proposed in [78] can be applied to generate 3D models [79] or to synthesise new motion from captured motion data [80].
Animation synthesis based on key-frames [81] follows the same concept as [78–80], interpolating the frames between two key-frames. However, the quality of the interpolated frames depends on the inter-key-frame distance and on the interpolation method used. Assa et al. [17] proposed the use of action synopsis images as icons (for personal computer desktops and folders) and thumbnails of 3D animations. Assa et al. also proposed an automatic or semi-automatic generation method to create comic strips and storyboards for 3D animation. Lee et al. [25] presented a method to create a single-image summary of a 2D or 3D animation, which can be used in the same applications as Assa's work. Halit et al. [56] proposed a tool for thumbnail generation from motion animation sequences. Several authors [82–85] have used key-frame extraction methods in 2D-to-3D video conversion.
Prospects and challenges
Although some significant work has been done in the 3D video summarisation domain, many issues are still open and deserve further research, especially in the following areas.
SBD and key-frame extraction methods
The selection of the features used by shot boundary detection and key-frame extraction methods is still an open research problem, because these features depend on the application, the video content and the representation format. For instance, in fast-motion scenes edge information is not the best choice to detect shot boundaries, due to motion-induced blur. Thus, it may be better to automatically find useful features based on some assumptions about the video content. The majority of key-frame extraction methods published in the literature use low-level features and content sampling approaches to identify the relevant frames that should be included in the key-frame summary. Recently, and in the context of 2D video, some key-frame extraction methods based on visual attention models have emerged [60–63, 86], and the inclusion of perceptual metrics in SBD and key-frame extraction methods is gaining ground. However, for 3D video only two such solutions are available [41, 42]. Hence, key-frame extraction in 3D video still poses relevant research problems to be investigated and efficiently solved. Another open challenge is the combination of visual features with additional information, such as audio features, text captions and content descriptions, for use in shot boundary detection and in the selection of the optimal frames in 3D video. In the current literature, there is also a lack of summarisation methods, based on key-frames or video skims, for the most recent 3D video formats such as MVD and plenoptic video. Another topic open to further research is the application of scalable summarisation to 3D formats [87]. Although several previous works addressed scalable summarisation for 2D video, e.g. [88, 89], such methods have not been extended to 3D and multiview, which leads to open research questions. In the past, evaluation frameworks for 2D key-frame summarisation methods were proposed in [90, 91]. More recently, Avila et al. [24] proposed another evaluation setup, in which the original videos and the key-frame summaries produced by several key-frame extraction methods for 2D video are available for download. Unfortunately, for the case of 3D video there is not yet any similar framework where key-frame summaries and the respective original sequences are available for research use.
The number and diversity of evaluation metrics (objective, result description and subjective) used to compare state-of-the-art key-frame extraction methods make their comparative assessment a difficult task. Therefore, the development of metrics which can be used to evaluate key-frame summaries across different content domains and 3D video formats is a very important area of video-summarisation research. Furthermore, the focus of the evaluation process must be application-dependent. For instance, in browsing applications the time spent by the user to search or browse for a particular video is the most important factor, whereas in event detection the evaluation metric must focus on the successful detection of those events. Another problem that arises in the evaluation process is the replication of results of previous works, as some works are not described in enough detail to allow independent implementation, or the input data are unavailable or difficult to use due to data format incompatibilities or lack of information about their representation format. Thus, the best way to test and compare key-frame extraction methods for 2D and 3D video is to build publicly accessible repositories containing test kits, made up of executable or web-executable versions of the methods and the test sequences. Another challenging topic in 3D key-frame summarisation research is the design of an efficient and intuitive visualisation interface that allows easy navigation and visualisation of the key-frame summaries. Such applications should be independent of the terminal capabilities (display dimensions, processing and battery power), i.e. they should be usable on small-screen devices such as smartphones as well as on ultra-high-definition displays. In addition, the visualisation interface should be independent of the key-frame summarisation method, to allow the visualisation of different formats of 3D key-frame video summaries, such as stereoscopic video or video-plus-depth, and also 2D video, within the same framework. The interface should be capable of dealing with the most common key-frame visualisation methods, such as the static storyboard, the dynamic slideshow and hierarchically arranged viewing. In particular, the most recent 3D interfaces for searching and viewing images or video in large databases, 3D-Ring and 3D-Globe, are interesting solutions which should be taken into account in the definition of new key-frame visualisation methods [58].
Video summary coding
In the past, the problem of scalable coding of video summaries was addressed in [88, 92–94]. In [92], the authors propose a hierarchical frame selection scheme which considers semantic relevance in video sequences at different levels, computed from compressed wavelet-based scalable video. In [93], a method to generate video summaries from scalable video streams based on motion information is presented, while in [94] the authors propose to partition a video summary into summarisation units related by the prediction structure and independently decodable. Ferreira et al. in [88] proposed a method to encode an arbitrary video summary using dynamic GOP structures in scalable streams; the resulting scalable stream is fully compatible with the scalable extension of the H.264/AVC standard. However, all these approaches were proposed for 2D video and used older-generation video coding methods.
The application of video summary coding to 3D video formats and the use of the most recent video coding standards, such as HEVC, should also be explored to find efficient coding tools for this purpose.
Conclusions
In this paper, we have presented a review of 3D key-frame extraction methods covering the major results published in recent journal issues and conference proceedings. Different state-of-the-art methods for key-frame extraction and evaluation metrics were presented and examined, and the most important presentation methods for key-frame summaries were also discussed. Various suggestions for the development of future 3D video summarisation methods were made, particularly oriented towards future research on 3D key-frame extraction methods and the potential benefits of approaches based on visual attention models. So far, 3D key-frame extraction methods based on visual attention have not been deeply researched, so this is an interesting point to be explored. More research effort should also be put into methods for the performance evaluation of key-frame extraction algorithms. The current plethora of different objective and subjective evaluation methods, most of them not easily comparable with each other, motivates a research goal towards unified and comparable methods for performance evaluation and benchmarking of 3D video summaries. Another important and interesting research topic is the design and implementation of methods and tools to present 3D key-frame summaries; it is clear that the way a key-frame set is presented to viewers influences the time and effort they must devote to interpreting the summarised visual data. Finally, efficient coding of video summaries also raises research problems which are still open, since no specific solutions for 3D video are currently available.
References
A Smolic, K Mueller, N Stefanoski, J Ostermann, A Gotchev, GB Akar, G Triantafyllidis, A Koz, Coding algorithms for 3DTV—a survey. Circ Syst Video Technol IEEE Trans. 17(11), 1606–1621 (2007). P Merkle, K Müller, T Wiegand, 3D video: acquisition, coding, and display. Consum Electron Transac. 56(2), 946–950 (2010). S Chikkerur, V Sundaram, M Reisslein, LJ Karam, Objective video quality assessment methods: A classification, review, and performance comparison. Broadcasting IEEE Transac. 57(2), 165–182 (2011). BT Truong, S Venkatesh, Video abstraction: a systematic review and classification. ACM Trans Multimedia Comput. Commun. Appl. (TOMCCAP). 3(1), 3 (2007). W Hu, N Xie, L Li, X Zeng, S Maybank, A survey on visual content-based video indexing and retrieval. Syst Man Cybern. Part C: Appl Reviews, IEEE Trans. 41(6), 797–819 (2011). AG Money, H Agius, Video summarisation: a conceptual framework and survey of the state of the art. J Vis Commun Image Represent. 19(2), 121–143 (2008). Y Li, S-H Lee, C-H Yeh, C-CJ Kuo, Techniques for movie content analysis and skimming: tutorial and overview on video abstraction techniques. IEEE Signal Process Mag. 23(2), 79–89 (2006). K McHenry, P Bajcsy, An Overview of 3D Data Content, File Formats and Viewers. Technical Report NCSA-ISDA08-002 (2008). http://207.245.165.87/applied-research/papers/overview-3d-data.pdf. EH Adelson, JR Bergen, in The plenoptic function and the elements of early vision, ed. by M Landy, JA Movshon (MIT Press, Cambridge, 1991), pp. 3–20. ND Doulamis, AD Doulamis, YS Avrithis, KS Ntalianis, SD Kollias, Efficient summarization of stereoscopic video sequences. Circ Syst Video Technol IEEE Transac. 10(4), 501–517 (2000).
L Ferreira, P Assuncao, LA da Silva Cruz, in Content-Based Multimedia Indexing (CBMI), 2013 11th International Workshop On. 3D video shot boundary detection based on clustering of depth-temporal features (IEEEVeszprem, 2013), pp. 1–6. Q Xu, P Wang, B Long, M Sbert, M Feixas, R Scopigno, in Systems Man and Cybernetics (SMC), 2010 IEEE International Conference On. Selection and 3D visualization of video key frames (Istanbul, 2010), pp. 52–59. K Papachristou, A Tefas, N Nikolaidis, I Pitas, in Machine Learning for Signal Processing (MLSP), 2014 IEEE International Workshop On. Stereoscopic video shot classification based on weighted linear discriminant analysis (Reims, 2014), pp. 1–6. J Lin, X Ruan, N Yu, R Wei, in The 27th Chinese Control and Decision Conference (2015 CCDC). One-shot learning gesture recognition based on improved 3D SMoSIFT feature descriptor from RGB-D videos (Qingdao, 2015), pp. 4911–4916. I Mademlis, N Nikolaidis, I Pitas, in Signal Processing Conference (EUSIPCO), 2015 23rd European. Stereoscopic video description for key-frame extraction in movie summarization (Nice, 2015), pp. 819–823. Y Fu, Y Guo, Y Zhu, F Liu, C Song, Z-H Zhou, Multi-view video summarization. Multimedia IEEE Transac. 12(7), 717–729 (2010). J Assa, Y Caspi, D Cohen-Or, Action synopsis: pose selection and illustration. ACM Trans Graphics (TOG). 24(3), 667–676 (2005). K-S Huang, C-F Chang, Y-Y Hsu, S-N Yang, Key probe: a technique for animation keyframe extraction. The Visual Computer. 21(8–10), 532–541 (2005). J Xu, T Yamasaki, K Aizawa, Summarization of 3D video by rate-distortion trade-off. IEICE Transactions on Information and Systems. 90(9), 1430–1438 (2007). T Yamasaki, K Aizawa, Motion segmentation and retrieval for 3D video based on modified shape distribution. EURASIP J. Appl. Signal Process. 2007(1), 211–211 (2007). Article MATH Google Scholar P Huang, A Hilton, J Starck, Shape similarity for 3D video sequences of people. Int J Comput Vision. 89(2), 362–381 (2010). T-Y Lee, C-H Lin, Y-S Wang, T-G Chen, Animation key-frame extraction and simplification using deformation analysis. Circ Syst Video Technol IEEE Trans. 18(4), 478–486 (2008). J Xu, T Yamasaki, K Aizawa, Temporal segmentation of 3-D video by histogram-based feature vectors. Circ Syst Video Technol IEEE Trans. 19(6), 870–881 (2009). SEF de Avila, APB ao Lopes, A da Luz Jr., A de Albuquerque Araúo, VSUMM: a mechanism designed to produce static video summaries and a novel evaluation method. Pattern Recogn Lett. 32(1), 56–68 (2011). Image Processing, Computer Vision and Pattern Recognition in Latin America. H-J Lee, HJ Shin, J-J Choi, Single image summarization of 3D animation using depth images. Comput Animat Virtual Worlds. 23(3–4), 417–424 (2012). Y-J Zhang, Advances in image and video segmentation (IRM Press, USA, 2006). J Yuan, H Wang, L Xiao, W Zheng, J Li, F Lin, B Zhang, A formal study of shot boundary detection. Circ Syst Video Technol IEEE Trans. 17(2), 168–186 (2007). AF Smeaton, P Over, AR Doherty, Video shot boundary detection: seven years of TRECVid activity. Comput Vision Image Underst. 114(4), 411–418 (2010). Special issue on Image and Video Retrieval Evaluation. C Cotsaces, N Nikolaidis, I Pitas, Video shot detection and condensed representation. A review. Signal Process Mag IEEE. 23(2), 28–37 (2006). J Nam, AH Tewfik, Detection of gradual transitions in video sequences using B-spline interpolation. Multimed IEEE Trans. 7(4), 667–679 (2005). S Lian, Automatic video temporal segmentation based on multiple features. 
Wing Loads and Structural Layout

Andrew Wood • 28 September 2022

This is part three in a five-part series on airframe structures and control surfaces. This tutorial focuses on the structural design of an aircraft wing and introduces the various control surfaces attached to the wing's trailing edge.

A wing is designed to produce sufficient lift to support the aircraft throughout its design envelope. Every wing is therefore designed to produce and support a multiple of the total weight of the airplane. This is termed the load factor and was discussed in part one of this series. Most general aviation aircraft are designed to a load factor of between four and six. The various components that make up the wing structure must be capable of supporting this aerodynamic load throughout the certified design envelope.

A wing is not designed to produce an equal upward force at all points along the span but rather produces the greatest percentage of the total lift closer to the root, diminishing outboard towards the wingtip.

Figure 1: Full span lift distribution on an aircraft

The maximum wing loads are seen at the wing root where the wing attaches to the fuselage. The two primary contributors to the total stress are the vertical lift force and the resulting bending moment. The wing also tends to pitch up and down during flight, which is reacted at the root by a torque at the attachment points.

Figure 2: Lift distribution and wing bending moment

Induced drag is formed as a by-product of the lift generated and, along with profile drag, introduces forces into the wing which tend to push the wing backward. While the magnitude of the drag force produced is a lot smaller than the lift, the structure must still be designed to support these forces at the limits of the design envelope.

Wing Layout

Wing Configurations

There are many different wing configurations in use today. We can broadly classify a wing-fuselage interface in terms of three design variables: the number of wings used to produce the required lift, the location of the wing, and the wing-fuselage attachment methodology.

Number of Wings

A triplane has three wings, a biplane two, and a monoplane, the most common configuration in use today, has a single primary lifting surface.

Figure 3: Boeing Stearman biplane (top) and Piper PA-28 monoplane (bottom)

Wing Location

Wings can be located above the fuselage (high wing), through the center of the fuselage (mid wing), or towards the bottom of the fuselage (low wing).

Figure 4: Low wing Piper Seminole (top) and high wing Cessna Caravan (bottom)

Cantilevered or Braced

A cantilevered wing has no external bracing and is connected to the fuselage only at the root. Many light aircraft make use of a strut which reduces the bending moment at the wing root, allowing a smaller (lighter) wing-to-fuselage attachment. The strut may reduce the bending at the root but does produce more drag than an equivalent cantilevered wing.

Figure 5: Braced Cessna 172 and cantilevered Cessna 210

Designing the planform or shape of a wing is a complicated process undertaken to optimize the aircraft for a particular mission. In our Fundamentals of Aircraft Design series there are three posts dedicated to preliminary wing design.
You are encouraged to go and read through the posts on wing area and aspect ratio, sweep and airfoil aerodynamics if you are interested. Here we will briefly touch on two wing design variables: the planform wing area and the aspect ratio, which are two primary drivers behind the performance of a general aviation wing.

The wing area is defined as the planform surface area of the wing. This is the area of the wing when viewed from directly above the aircraft.

Figure 6: The planform wing area and aspect ratio of a tapered wing

The aspect ratio is the ratio of the span of the wing to its chord. There are very few perfectly rectangular wings and so a little manipulation is required in order to calculate the aspect ratio of a tapered wing.

$$ AR = \frac{b^2}{A} $$

Figure 7: Calculation of the aircraft wing aspect ratio

Lift is an aerodynamic force which is produced as a consequence of the curvature of the wing and the angle of attack of the relative velocity flowing over the surface. It follows that larger wings of a greater planform area are able to produce more lift; this is easily shown mathematically from the lift formula:

$$ L = \frac{1}{2} \rho V^{2} A C_{L} $$

\( L: \) Total Lift Force
\( \rho: \) Air Density
\( V: \) Velocity
\( A: \) Planform Wing Area
\( C_{L}: \) Lift Coefficient

The total lift force increases in proportion with the wing area. A better gauge of the relative size of the wing is the wing loading, which is calculated by dividing the aircraft mass by the wing area.

$$ WL = \frac{Weight}{Wing \ Area} $$

If we assume that the aircraft is flying at a 1g load factor then the lift will be equal to the weight, and the lift formula can be rearranged in terms of velocity. The effect that wing loading has on cruise speed can be shown by comparing two general aviation aircraft with very different wing loadings: the Cessna 172 and the Lancair Legacy.

Figure 8: Lancair Legacy (top) and Cessna 172 (bottom) have very different wing loadings

| | Cessna 172 | Lancair Legacy | Units | Ratio: Legacy/172 |
| --- | --- | --- | --- | --- |
| Gross Weight | 1111 | 998 | kg | |
| Wing Area | 16.2 | 7.66 | m² | 0.47 |
| Wing Loading | 68.6 | 130.3 | kg/m² | 1.90 |
| Cruise Speed | 122 | 240 | knots | 1.96 |
| Stall Speed | 47 | 56 | knots | 1.19 |

$$ V_{cruise} = \sqrt{\frac{2 \, WL}{\rho \, C_{L_{cruise}}}} $$

The lift formula is rearranged to determine speed as a function of wing loading and the lift coefficient. If we assume that the lift coefficient is approximately constant between the two aircraft during cruise (this is an acceptable assumption here to demonstrate the concept of wing loading), then we can compare the effect that wing loading has on the resulting cruise speed. An increased wing loading corresponds to a smaller wing at a given mass, and results in an increased cruise speed. Of course the Legacy has a much larger engine which allows it to reach a far higher cruise speed (drag is proportional to \( V^2 \)), but the point still stands that an aircraft that is designed to cruise at higher speeds will do so most efficiently with a higher wing loading. The highly loaded wing also results in a higher stall speed (clean), and a more complicated flap arrangement (greater increase in lift coefficient) is thus required to reduce the stall speed.

The aspect ratio was introduced in the section above and is a measure of the shape of the wing. On a rectangular wing it is determined by the ratio of the span to the chord. On a tapered wing it can be found using the formula given above, \( AR = b^{2}/A \), where \( b \) is the wingspan and \( A \) is the planform wing area. High aspect ratio wings are long and thin while low aspect ratio wings are short and stubby.
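As a quick numerical check of the comparison above, the short Python sketch below recomputes the wing loadings from the table and the cruise-speed ratio that wing loading alone would suggest through the \( V \propto \sqrt{WL} \) relationship. The equal lift coefficient and air density assumption is the same simplification made in the text, and all aircraft data are taken directly from the table.

```python
# Quick check of the wing loading comparison above (values from the table).
from math import sqrt

aircraft = {
    # name:            (mass [kg], wing area [m^2], cruise speed [knots])
    "Cessna 172":      (1111, 16.2, 122),
    "Lancair Legacy":  (998, 7.66, 240),
}

wing_loading = {name: mass / area for name, (mass, area, _) in aircraft.items()}
for name, wl in wing_loading.items():
    print(f"{name}: wing loading = {wl:.1f} kg/m^2")

# With equal lift coefficient and air density in cruise, rearranging
# L = 0.5 * rho * V^2 * A * CL (with L = W) gives V proportional to sqrt(WL).
wl_ratio = wing_loading["Lancair Legacy"] / wing_loading["Cessna 172"]
print(f"Wing loading ratio (Legacy/172):      {wl_ratio:.2f}")        # ~1.90
print(f"Speed ratio implied by wing loading:  {sqrt(wl_ratio):.2f}")  # ~1.38
print(f"Actual cruise speed ratio:            {240 / 122:.2f}")       # compare with 1.96 in the table
```

Wing loading by itself accounts for a cruise-speed factor of roughly 1.4; the remainder of the observed factor of about 1.96 comes from the Legacy's much larger engine, as noted above.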
The aspect ratio plays an important role in determining the amount of lift-induced drag generated.

$$ C_{D_{i}} = \frac{C_{L}^{2}}{\pi AR e} $$

\( C_{D_{i}}: \) Lift-induced Drag Coefficient
\( AR: \) Wing Aspect Ratio
\( e: \) Oswald Efficiency Factor

Figure 9: A comparison of the aspect ratio between a Cessna 152 and Dash 8 Q400

Higher aspect ratio wings result in a lower lift-induced drag coefficient. This is why gliders have long slender wings (high AR), as drag minimization is paramount to obtain the best glide ratio. A high aspect ratio wing is more structurally challenging to design, as the wing will flex more in flight, creating larger bending stresses and a damped roll control response. Structural flutter is also more prevalent in higher aspect ratio wings.

Flaps and ailerons are located at the trailing edge of the wing. Both control surfaces work by modifying the local camber and lift distribution over the area in which they operate. Ailerons are used for roll control and are located at the outboard section of each wing. Flaps are located inboard of the ailerons and are used to generate additional lift at low speeds through symmetrical deployment.

Figure 10: Location of flaps and ailerons on a Cessna 152

The flaps and ailerons are attached to a rear spar which runs along the span. The spar is designed to resist and transfer the loads generated by the deflection of the control surfaces. Trailing edge flaps are one of two devices used to extract additional lift from a wing at low speed. Slats modify the camber at the leading edge, performing a similar role to the flaps. High-lift devices are a large topic on their own and are discussed in detail in Part 4 of this mini-series.

Ailerons are used to provide roll control and do so by generating a large rolling moment through asymmetrical deflection. The figure below demonstrates a roll to the left. The aileron on the right wing deflects downwards, which produces additional upward lift on the right wing. The left aileron deflects upward, which modifies the flow field, generating a downforce at the left wingtip. Together these deflections generate a rolling moment which forces the right wing up and the left wing down.

Figure 11: Ailerons generate a rolling moment around the longitudinal axis

Semi-monocoque Wing

The various structural design methodologies were discussed in part one of this series. This discussion on the structural design of a wing only considers the semi-monocoque design philosophy, as it is the most popular structural layout in use today. In a semi-monocoque structure both the outer skin and the internal substructure are load bearing, and both contribute to the overall stiffness of the structure.

A semi-monocoque structure is well suited to being built from aluminium as the material is both light and strong. The density of an aluminium alloy is approximately one-third that of steel, which allows for thicker structural sections to be built from aluminium than would be possible with a steel structure of equivalent mass. Thicker skins are advantageous as these are less likely to buckle under load. This allows for an efficient structure to be constructed as the wing skins can be used to distribute and carry the loads generated by the wing.

A typical wing internal structural layout is shown in the image below:

Figure 12: Internal structure of a semi-monocoque aircraft wing

A wing is comprised of four principal structural components that work together to support and distribute the aerodynamic forces produced during flight.
Wing Spars

These make up the longitudinal components of the structure. A spar is made up of two components: the spar web and the spar caps. The two components typically are arranged to form an I-section.

Figure 13: Aircraft main spar composed of spar caps and a shear web

The spar caps are designed to carry the axial loads (tension and compression) that arise from the bending moment produced by the wing under load. In a positive g manoeuvre, the spar caps on the upper surface of the wing are in compression and those on the lower surface are in tension.

Figure 14: Bending moment generated by wing lift distribution

The cross-sectional areas of the spar caps determine how much load each can support. Since the bending moment is greatest at the root of the wing and smallest at the tip, it is common for the spar caps to be tapered from root to tip in order to minimize the structural mass of the wing. The spar web separates the upper and lower spar caps and carries the vertical shear load that the wing produces. The web also adds torsional stiffness to the wing and feeds load into the spar caps through shear flow.

Stiffeners

These are longitudinal components that perform a similar function to the spar caps in that they carry axial loads that arise from the bending of the wing. The stiffeners are spaced laterally through the wing to support the wing skins against buckling.

Wing ribs are spaced along the span of the wing and give the wing its aerodynamic shape. The ribs, spar caps, and stiffeners form bays throughout the wing that support the wing skins against buckling. Any point loads are introduced into the wing at ribs, which form hardpoints. Landing gear legs and engine mounts are supported by especially sturdy ribs, as the loads introduced by these components can be very large.

The wing skins in a semi-monocoque structure are load bearing and carry and transmit shear loads into the neighbouring spar caps and stiffeners.

This concludes this post on the wing structural layout. The next post provides a more detailed look at the design and operation of a typical high-lift system. If you enjoyed this post or found it useful as a study aid, then please introduce your colleagues and friends to AeroToolbox.com and share this on your favorite social media platform. Thanks for reading.

This article is part of a series on Airframe Structure And Control Surfaces.
I'm wary of others, though. The trouble with using a blanket term like "nootropics" is that you lump all kinds of substances in together. Technically, you could argue that caffeine and cocaine are both nootropics, but they're hardly equal. With so many ways to enhance your brain function, many of which have significant risks, it's most valuable to look at nootropics on a case-by-case basis. Here's a list of 9 nootropics, along with my thoughts on each. Fortunately for me, the FDA decided Smart Powder's advertising was too explicit and ordered its piracetam sales stopped; I was equivocal at the previous price point, but then I saw that between the bulk discount and the fire-sale coupon, 3kg was only $99.99 (shipping was amortized over that, the choline, caffeine, and tryptophan). So I ordered in September 2010. As well, I had decided to cap my own pills, eliminating the inconvenience and bad taste. 3kg goes a very long way so I am nowhere close to running out of my pills; there is nothing to report since, as the pills are simply part of my daily routine. The first night I was eating some coconut oil, I did my n-backing past 11 PM; normally that damages my scores, but instead I got 66/66/75/88/77% (▁▁▂▇▃) on D4B and did not feel mentally exhausted by the end. The next day, I performed well on the Cambridge mental rotations test. An anecdote, of course, and it may be due to the vitamin D I simultaneously started. Or another day, I was slumped under apathy after a promising start to the day; a dose of fish & coconut oil, and 1 last vitamin D, and I was back to feeling chipper and optimist. Unfortunately I haven't been testing out coconut oil & vitamin D separately, so who knows which is to thank. But still interesting. The term "smart pills" refers to miniature electronic devices that are shaped and designed in the mold of pharmaceutical capsules but perform highly advanced functions such as sensing, imaging and drug delivery. They may include biosensors or image, pH or chemical sensors. Once they are swallowed, they travel along the gastrointestinal tract to capture information that is otherwise difficult to obtain, and then are easily eliminated from the system. Their classification as ingestible sensors makes them distinct from implantable or wearable sensors. "As a neuro-optometrist who cares for many brain-injured patients experiencing visual challenges that negatively impact the progress of many of their other therapies, Cavin's book is a god-send! The very basic concept of good nutrition among all the conflicting advertisements and various "new" food plans and diets can be enough to put anyone into a brain fog much less a brain injured survivor! Cavin's book is straightforward and written from not only personal experience but the validation of so many well-respected contemporary health care researchers and practitioners! I will certainly be recommending this book as a "Survival/Recovery 101" resource for all my patients including those without brain injuries because we all need optimum health and well-being and it starts with proper nourishment! Kudos to Cavin Balaster!" "I think you can and you will," says Sarter, but crucially, only for very specific tasks. For example, one of cognitive psychology's most famous findings is that people can typically hold seven items of information in their working memory. Could a drug push the figure up to nine or 10? "Yes. If you're asked to do nothing else, why not? That's a fairly simple function." 
On the other end of the spectrum is the nootropic stack, a practice where individuals create a cocktail or mixture of different smart drugs for daily intake. The mixture and its variety actually depend on the goals of the user. Many users have said that nootropic stacking is more effective for delivering improved cognitive function in comparison to single nootropics. And there are other uses that may make us uncomfortable. The military is interested in modafinil as a drug to maintain combat alertness. A drug such as propranolol could be used to protect soldiers from the horrors of war. That could be considered a good thing – post-traumatic stress disorder is common in soldiers. But the notion of troops being unaffected by their experiences makes many feel uneasy. In 2011, as part of the Silk Road research, I ordered 10x100mg Modalert (5btc) from a seller. I also asked him about his sourcing, since if it was bad, it'd be valuable to me to know whether it was sourced from one of the vendors listed in my table. He replied, more or less, I get them from a large Far Eastern pharmaceuticals wholesaler. I think they're probably the supplier for a number of the online pharmacies. 100mg seems likely to be too low, so I treated this shipment as 5 doses: Another empirical question concerns the effects of stimulants on motivation, which can affect academic and occupational performance independent of cognitive ability. Volkow and colleagues (2004) showed that MPH increased participants' self-rated interest in a relatively dull mathematical task. This is consistent with student reports that prescription stimulants make schoolwork seem more interesting (e.g., DeSantis et al., 2008). To what extent are the motivational effects of prescription stimulants distinct from their cognitive effects, and to what extent might they be more robust to differences in individual traits, dosage, and task? Are the motivational effects of stimulants responsible for their usefulness when taken by normal healthy individuals for cognitive enhancement? Some data suggest that cognitive enhancers do improve some types of learning and memory, but many other data say these substances have no effect. The strongest evidence for these substances is for the improvement of cognitive function in people with brain injury or disease (for example, Alzheimer's disease and traumatic brain injury). Although "popular" books and companies that sell smart drugs will try to convince you that these drugs work, the evidence for any significant effects of these substances in normal people is weak. There are also important side-effects that must be considered. Many of these substances affect neurotransmitter systems in the central nervous system. The effects of these chemicals on neurological function and behavior is unknown. Moreover, the long-term safety of these substances has not been adequately tested. Also, some substances will interact with other substances. A substance such as the herb ma-huang may be dangerous if a person stops taking it suddenly; it can also cause heart attacks, stroke, and sudden death. Finally, it is important to remember that products labeled as "natural" do not make them "safe." That left me with 329 days of data. The results are that (correcting for the magnesium citrate self-experiment I was running during the time period which did not turn out too great) days on which I happened to use my LED device for LLLT were much better than regular days. 
Below is a graph showing the entire MP dataseries with LOESS-smoothed lines showing LLLT vs non-LLLT days: The search to find more effective drugs to increase mental ability and intelligence capacity with neither toxicity nor serious side effects continues. But there are limitations. Although the ingredients may be separately known to have cognition-enhancing effects, randomized controlled trials of the combined effects of cognitive enhancement compounds are sparse. Taurine (Examine.com) was another gamble on my part, based mostly on its inclusion in energy drinks. I didn't do as much research as I should have: it came as a shock to me when I read in Wikipedia that taurine has been shown to prevent oxidative stress induced by exercise and was an antioxidant - oxidative stress is a key part of how exercise creates health benefits and antioxidants inhibit those benefits. Neuro Optimizer is Jarrow Formula's offering on the nootropic industry, taking a more creative approach by differentiating themselves as not only a nootropic that enhances cognitive abilities, but also by making sure the world knows that they have created a brain metabolizer. It stands out from all the other nootropics out there in this respect, as well as the fact that they've created an all-encompassing brain capsule. What do they really mean by brain metabolizer, though? It means that their capsule is able to supply nutrition… Learn More... The U.S. Centers for Disease Control and Prevention estimates that gastrointestinal diseases affect between 60 and 70 million Americans every year. This translates into tens of millions of endoscopy procedures. Millions of colonoscopy procedures are also performed to diagnose or screen for colorectal cancers. Conventional, rigid scopes used for these procedures are uncomfortable for patients and may cause internal bruising or lead to infection because of reuse on different patients. Smart pills eliminate the need for invasive procedures: wireless communication allows the transmission of real-time information; advances in batteries and on-board memory make them useful for long-term sensing from within the body. The key application areas of smart pills are discussed below. Creatine is a substance that's produced in the human body. It is initially produced in the kidneys, and the process is completed in the liver. It is then stored in the brain tissues and muscles, to support the energy demands of a human body. Athletes and bodybuilders use creatine supplements to relieve fatigue and increase the recovery of the muscle tissues affected by vigorous physical activities. Apart from helping the tissues to recover faster, creatine also helps in enhancing the mental functions in sleep-deprived adults, and it also improves the performance of difficult cognitive tasks. Remembering what Wedrifid told me, I decided to start with a quarter of a piece (~1mg). The gum was pretty tasteless, which ought to make blinding easier. The effects were noticeable around 10 minutes - greater energy verging on jitteriness, much faster typing, and apparent general quickening of thought. Like a more pleasant caffeine. While testing my typing speed in Amphetype, my speed seemed to go up >=5 WPM, even after the time penalties for correcting the increased mistakes; I also did twice the usual number without feeling especially tired. A second dose was similar, and the third dose was at 10 PM before playing Ninja Gaiden II seemed to stop the usual exhaustion I feel after playing through a level or so. 
(It's a tough game, which I have yet to master like Ninja Gaiden Black.) Returning to the previous concern about sleep problems, though I went to bed at 11:45 PM, it still took 28 minutes to fall sleep (compared to my more usual 10-20 minute range); the next day I use 2mg from 7-8PM while driving, going to bed at midnight, where my sleep latency is a more reasonable 14 minutes. I then skipped for 3 days to see whether any cravings would pop up (they didn't). I subsequently used 1mg every few days for driving or Ninja Gaiden II, and while there were no cravings or other side-effects, the stimulation definitely seemed to get weaker - benefits seemed to still exist, but I could no longer describe any considerable energy or jitteriness. Among the questions to be addressed in the present article are, How widespread is the use of prescription stimulants for cognitive enhancement? Who uses them, for what specific purposes? Given that nonmedical use of these substances is illegal, how are they obtained? Furthermore, do these substances actually enhance cognition? If so, what aspects of cognition do they enhance? Is everyone able to be enhanced, or are some groups of healthy individuals helped by these drugs and others not? The goal of this article is to address these questions by reviewing and synthesizing findings from the existing scientific literature. We begin with a brief overview of the psychopharmacology of the two most commonly used prescription stimulants. Nootropics are a specific group of smart drugs. But nootropics aren't the only drugs out there that promise you some extra productivity. More students and office workers are using drugs to increase their productivity than ever before [79]. But unlike with nootropics, many have side-effects. And that is precisely what is different between nootropics and other enhancing drugs, nootropics have little to no negative side-effects. I have also tried to get in contact with senior executives who have experience with these drugs (either themselves or in their firms), but without success. I have to wonder: Are they completely unaware of the drugs' existence? Or are they actively suppressing the issue? For now, companies can ignore the use of smart drugs. And executives can pretend as if these drugs don't exist in their workplaces. But they can't do it forever. From its online reputation and product presentation to our own product run, Synagen IQ smacks of mediocre performance. A complete list of ingredients could have been convincing and decent, but the lack of information paired with the potential for side effects are enough for beginners to old-timers in nootropic use to shy away and opt for more trusted and reputable brands. There is plenty that needs to be done to uplift the brand and improve its overall ranking in the widely competitive industry. Learn More... Smart drugs offer significant memory enhancing benefits. Clinical studies of the best memory pills have shown gains to focus and memory. Individuals seek the best quality supplements to perform better for higher grades in college courses or become more efficient, productive, and focused at work for career advancement. It is important to choose a high quality supplement to get the results you want. A study mentioned in Neuropsychopharmacology as of August 2002, revealed that Bacopa Monnieri decreases the rate of forgetting newly acquired information, memory consolidations, and verbal learning rate. 
It also helps in enhancing the nerve impulse transmission, which leads to increased alertness. It is also known to relieve the effects of anxiety and depression. All these benefits happen as Bacopa Monnieri dosage helps in activating choline acetyltransferase and inhibiting acetylcholinesterase which enhances the levels of acetylcholine in the brain, a chemical that is also associated in improving memory and attention. Gibson and Green (2002), talking about a possible link between glucose and cognition, wrote that research in the area …is based on the assumption that, since glucose is the major source of fuel for the brain, alterations in plasma levels of glucose will result in alterations in brain levels of glucose, and thus neuronal function. However, the strength of this notion lies in its common-sense plausibility, not in scientific evidence… (p. 185). We'd want 53 pairs, but Fitzgerald 2012's experimental design called for 32 weeks of supplementation for a single pair of before-after tests - so that'd be 1664 weeks or ~54 months or ~4.5 years! We can try to adjust it downwards with shorter blocks allowing more frequent testing; but problematically, iodine is stored in the thyroid and can apparently linger elsewhere - many of the cited studies used intramuscular injections of iodized oil (as opposed to iodized salt or kelp supplements) because this ensured an adequate supply for months or years with no further compliance by the subjects. If the effects are that long-lasting, it may be worthless to try shorter blocks than ~32 weeks. Low-dose lithium orotate is extremely cheap, ~$10 a year. There is some research literature on it improving mood and impulse control in regular people, but some of it is epidemiological (which implies considerable unreliability); my current belief is that there is probably some effect size, but at just 5mg, it may be too tiny to matter. I have ~40% belief that there will be a large effect size, but I'm doing a long experiment and I should be able to detect a large effect size with >75% chance. So, the formula is NPV of the difference between taking and not taking, times quality of information, times expectation: \frac{10 - 0}{\ln 1.05} \times 0.75 \times 0.40 = 61.4, which justifies a time investment of less than 9 hours. As it happens, it took less than an hour to make the pills & placebos, and taking them is a matter of seconds per week, so the analysis will be the time-consuming part. This one may actually turn a profit. The infinite promise of stacking is why, whatever weight you attribute to the evidence of their efficacy, nootropics will never go away: With millions of potential iterations of brain-enhancing regimens out there, there is always the tantalizing possibility that seekers haven't found the elusive optimal combination of pills and powders for them—yet. Each "failure" is but another step in the process-of-elimination journey to biological self-actualization, which may be just a few hundred dollars and a few more weeks of amateur alchemy away. Clearly, the hype surrounding drugs like modafinil and methylphenidate is unfounded. These drugs are beneficial in treating cognitive dysfunction in patients with Alzheimer's, ADHD or schizophrenia, but it's unlikely that today's enhancers offer significant cognitive benefits to healthy users. In fact, taking a smart pill is probably no more effective than exercising or getting a good night's sleep. "I love this book! As someone that deals with an autoimmune condition, I deal with sever brain fog. 
I'm currently in school and this has had a very negative impact on my learning. I have been looking for something like this to help my brain function better. This book has me thinking clearer, and my memory has improved. I'm eating healthier and overall feeling much better. This book is very easy to follow and also has some great recipes included." Despite decades of study, a full picture has yet to emerge of the cognitive effects of the classic psychostimulants and modafinil. Part of the problem is that getting rats, or indeed students, to do puzzles in laboratories may not be a reliable guide to the drugs' effects in the wider world. Drugs have complicated effects on individuals living complicated lives. Determining that methylphenidate enhances cognition in rats by acting on their prefrontal cortex doesn't tell you the potential impact that its effects on mood or motivation may have on human cognition. "I cannot overstate how grateful I am to Cavin for having published this book (and launched his podcast) before I needed it. I am 3.5 months out from a concussion and struggling to recover that final 25% or so of my brain and function. I fully believe that diet and lifestyle can help heal many of our ills, and this book gives me a path forward right now. Gavin's story is inspiring, and his book is well-researched and clearly written. I am a food geek and so innately understand a lot of his advice — I'm not intimidated by the thought of drastically changing my diet because I know well how to shop and cook for myself — but I so appreciate how his gentle approach and stories about his own struggles with a new diet might help people who would find it all daunting. I am in week 2 of following his advice (and also Dr. Titus Chiu's BrainSave plan). It's not an instantaneous miracle cure, but I do feel better in several ways that just might be related to this diet." How exactly – and if – nootropics work varies widely. Some may work, for example, by strengthening certain brain pathways for neurotransmitters like dopamine, which is involved in motivation, Barbour says. Others aim to boost blood flow – and therefore funnel nutrients – to the brain to support cell growth and regeneration. Others protect brain cells and connections from inflammation, which is believed to be a factor in conditions like Alzheimer's, Barbour explains. Still others boost metabolism or pack in vitamins that may help protect the brain and the rest of the nervous system, explains Dr. Anna Hohler, an associate professor of neurology at Boston University School of Medicine and a fellow of the American Academy of Neurology. If you want to make sure that whatever you're taking is safe, search for nootropics that have been backed by clinical trials and that have been around long enough for any potential warning signs about that specific nootropic to begin surfacing. There are supplements and nootropics that have been tested in a clinical setting, so there are options out there. Adderall increases dopamine and noradrenaline availability within the prefrontal cortex, an area in which our memory and attention are controlled. As such, this smart pill improves our mood, makes us feel more awake and attentive. It is also known for its lasting effect – depending on the dose, it can last up to 12 hours. However, note that it is crucial to get confirmation from your doctor on the exact dose you should take. 
In my last post, I talked about the idea that there is a resource that is necessary for self-control…I want to talk a little bit about the candidate for this resource, glucose. Could willpower fail because the brain is low on sugar? Let's look at the numbers. A well-known statistic is that the brain, while only 2% of body weight, consumes 20% of the body's energy. That sounds like the brain consumes a lot of calories, but if we assume a 2,400 calorie/day diet - only to make the division really easy - that's 100 calories per hour on average, 20 of which, then, are being used by the brain. Every three minutes, then, the brain - which includes memory systems, the visual system, working memory, then emotion systems, and so on - consumes one (1) calorie. One. Yes, the brain is a greedy organ, but it's important to keep its greediness in perspective… Suppose, for instance, that a brain in a person exerting their willpower - resisting eating brownies or what have you - used twice as many calories as a person not exerting willpower. That person would need an extra one third of a calorie per minute to make up the difference compared to someone not exerting willpower. Does exerting self control burn more calories? I had tried 8 randomized days like the Adderall experiment to see whether I was one of the people whom modafinil energizes during the day. (The other way to use it is to skip sleep, which is my preferred use.) I rarely use it during the day since my initial uses did not impress me subjectively. The experiment was not my best - while it was double-blind randomized, the measurements were subjective, and not a good measure of mental functioning like dual n-back (DNB) scores which I could statistically compare from day to day or against my many previous days of dual n-back scores. Between my high expectation of finding the null result, the poor experiment quality, and the minimal effect it had (eliminating an already rare use), the value of this information was very small. While the commentary makes effective arguments — that this isn't cheating, because cheating is based on what the rules are; that this is fair, because hiring a tutor isn't outlawed for being unfair to those who can't afford it; that this isn't unnatural, because humans with computers and antibiotics have been shaping what is natural for millennia; that this isn't drug abuse anymore than taking multivitamins is — the authors seem divorced from reality in the examples they provide of effective stimulant use today. "In the hospital and ICU struggles, this book and Cavin's experience are golden, and if we'd have had this book's special attention to feeding tube nutrition, my son would be alive today sitting right here along with me saying it was the cod liver oil, the fish oil, and other nutrients able to be fed to him instead of the junk in the pharmacy tubes, that got him past the liver-test results, past the internal bleeding, past the brain difficulties controlling so many response-obstacles back then. Back then, the 'experts' in rural hospitals were unwilling to listen, ignored my son's unexpected turnaround when we used codliver oil transdermally on his sore skin, threatened instead to throw me out, but Cavin has his own proof and his accumulated experience in others' journeys. Cavin's boxed areas of notes throughout the book on applying the brain nutrient concepts in feeding tubes are powerful stuff, details to grab onto and run with… hammer them! Took pill 1:27 PM. 
At 2 my hunger gets the best of me (despite my usual tea drinking and caffeine+piracetam pills) and I eat a large lunch. This makes me suspicious it was placebo - on the previous days I had noted a considerable appetite-suppressant effect. 5:25 PM: I don't feel unusually tired, but nothing special about my productivity. 8 PM; no longer so sure. Read and excerpted a fair bit of research I had been putting off since the morning. After putting away all the laundry at 10, still feeling active, I check. It was Adderall. I can't claim this one either way. By 9 or 10 I had begun to wonder whether it was really Adderall, but I didn't feel confident saying it was; my feeling could be fairly described as 50%. It may also be necessary to ask not just whether a drug enhances cognition, but in whom. Researchers at the University of Sussex have found that nicotine improved performance on memory tests in young adults who carried one variant of a particular gene but not in those with a different version. In addition, there are already hints that the smarter you are, the less smart drugs will do for you. One study found that modafinil improved performance in a group of students whose mean IQ was 106, but not in a group with an average of 115. More recently, the drug modafinil (brand name: Provigil) has become the brain-booster of choice for a growing number of Americans. According to the FDA, modafinil is intended to bolster "wakefulness" in people with narcolepsy, obstructive sleep apnea or shift work disorder. But when people without those conditions take it, it has been linked with improvements in alertness, energy, focus and decision-making. A 2017 study found evidence that modafinil may enhance some aspects of brain connectivity, which could explain these benefits. Adaptogens are plant-derived chemicals whose activity helps the body maintain or regain homeostasis (equilibrium between the body's metabolic processes). Almost without exception, adaptogens are available over-the-counter as dietary supplements, not controlled drugs. Well-known adaptogens include Ginseng, Kava Kava, Passion Flower, St. Johns Wort, and Gotu Kola. Many of these traditional remedies border on being "folk wisdom," and have been in use for hundreds or thousands of years, and are used to treat everything from anxiety and mild depression to low libido. While these smart drugs work in a many different ways (their commonality is their resultant function within the body, not their chemical makeup), it can generally be said that the cognitive boost users receive is mostly a result of fixing an imbalance in people with poor diets, body toxicity, or other metabolic problems, rather than directly promoting the growth of new brain cells or neural connections. The magnesium was neither randomized nor blinded and included mostly as a covariate to avoid confounding (the Noopept coefficient & t-value increase somewhat without the Magtein variable), so an OR of 1.9 is likely too high; in any case, this experiment was too small to reliably detect any effect (~26% power, see bootstrap power simulation in the magnesium section) so we can't say too much. Piracetam boosts acetylcholine function, a neurotransmitter responsible for memory consolidation. Consequently, it improves memory in people who suffer from age-related dementia, which is why it is commonly prescribed to Alzheimer's patients and people struggling with pre-dementia symptoms. 
When it comes to healthy adults, it is believed to improve focus and memory, enhancing the learning process altogether.
Line Integrals and Surface Integrals

Can someone please explain what surface integrals and line integrals are measuring? Is a line integral the arc length along a surface, and a surface integral is the surface area? Also, why is a line integral equal to $0$ on a conservative closed path?

Line integrals are not always equal to zero around a closed path ~ That's the case only if the function in question is conservative (i.e. it has zero curl in a compact domain). – Ayesha

OK thanks. But what does that mean exactly?

Do you mean line and surface integrals of vector fields, or of scalar fields? "Conservative," in your last question, refers to vector fields. In that case, if you interpret the field as a force field, a line integral is the work done by the field around the curve, and a surface integral is the flux of the field through the surface. – symplectomorphic

Integrals of scalar-valued functions are hard to conceptualize visually in the general case because your region of integration, at least when first learning multivariable calculus, is almost universally represented in 3-space, which doesn't really leave room to graph that 4th dimension (the output of the scalar-valued function). If your region of integration is in the $xy$-plane however, and you're integrating something of the form $f(x,y)$, you can think of the output of your integral as the volume (or surface area) "below" $z=f(x,y)$. – Pockets

@MSIS: You need to clarify exactly what kind of object you're integrating. Let $\mathbf{F}$ be a vector field on a region of $\mathbb{R}^n$. You can integrate the dot product $\mathbf{F}\cdot\mathbf{dr}$ around a curve $c$ contained in that region; this is called a "line integral" of $\mathbf{F}$ along the curve $c$. You can also integrate $\mathbf{F}\cdot\mathbf{dA}$ along a surface $S$, where $\mathbf{dA}$ is related to the normal vector of the surface; this is called a "flux integral." These are different concepts than integrals of plain-old real-valued functions along a curve or surface.

There seems to be some confusion here, so first I'll answer the questions and then explain the reasoning. Is a line integral the arc length along a surface, and a surface integral is the surface area? Why is a line integral equal to 0 on a closed path? It isn't. (I've included the original text of your question instead of the edited text here for the sake of the explanation). Now that that's out of the way, let's start by thinking about the concepts at hand here. First, an integral. An integral is, at its core, a summation over a region. Take, for example, $\int_a^b f(x)dx$. Put crudely, it gives you the signed area "under" $f(x)$ by summing up the "heights" of $f(x)$ along the portion of the $x$-axis between $x=a$ and $x=b$. What, then, are line integrals and surface integrals? These are integrals described by the region over which they are integrating. In the example $\int_a^b f(x)dx$, the region of integration is the $x$-axis from $x=a$ to $x=b$.
In the case of a line integral, the region of integration is a curve (yes, this does seem like a bit of a misnomer; it's a rather unfortunate consequence of the new meanings that "linearity" acquires as the level of the math at hand gets higher - for your purposes, thinking of a curve as a twisted line may be the most intuitive method); in the case of a surface integral, the region of integration is a surface. The line integral and surface integral of $1$ do indeed give you, respectively, the length of the curve and the surface area of the surface that serve as the regions of integration. But there is no such thing as "the arc length along a surface" (there is such a thing as the length of the boundary of a surface, if said boundary exists of course, but arc length is a characteristic of 1-dimensional objects, which surfaces are not), and you should NEVER confuse the idea of the integral of a constant function with the integral itself; nor should you ever confuse the concept of a surface area integral with a surface integral. Integrals are tools used to study how functions behave in certain regions; line and surface integrals are no exception, they just refer to integrals on specific types of regions. Line and surface integrals which give you arc lengths and surface areas are a more specific subset of line and surface integrals.

Now, going back to the question of why a line integral is equal to $0$ on a conservative closed path: as Ayesha explained in the comments on your post (with reference to, I should note, concepts a bit beyond the scope of your question, e.g. compact domain), "that's the case only if the function in question is conservative." To expound, and to be a bit more precise, that's only if you're taking the line integral of a conservative vector-valued function (also referred to as a conservative vector field). I can't think of any good, accurate way to recognize, intuitively, whether or not a vector field is conservative (at least by looking at a graphical representation of the vector field) besides saying that there are "rules" governing the distribution of the vectors - but this is a very, very vague and poor description that applies to both non-conservative and conservative vector fields. Perhaps another commenter could offer further insight into this. The intuitive understanding of why a line integral of a conservative vector field is 0 on a closed path, however, is basically that "summing" up the vectors along a closed path gives you the zero vector; one good example is that of gravity - "what goes up must come down". If you throw an apple up in the air and then catch it exactly where you threw it up, although it's moved, it's returned to its original position (and has the same gravitational potential energy now as it did immediately before you threw it up). – Pockets

Line Integral: It is a curved domain! For instance, if you are integrating a function along the x-axis, it is a straight line. Here you are integrating the function which is taking values along this straight line (the x-axis). Now consider a semi-circle with its centre at the origin. If you want to integrate the function along this semicircle, you basically have to sum the values the function is taking on the points along the semi-circle. Integration is just a summation of values (roughly). Now the next question is: what values? Say $f(x,y) = 1+x^3+y^3$; if you go along the x-axis from -1 to 1, the function values will be different than going along a semicircle of unit radius centred at the origin of the axis system.
Say a mild storm is taking place in your neighbourhood: if you go against the storm you have to work hard, and if you go along with the storm you hardly have to do any work. Now, considering the work as a function, you can see that it depends upon the path you are taking. Same thing here: because my function takes on values all over the $x$-$y$ plane (let's assume that), you may be interested in integrating along a particular path, which may be curved or whatever the case may be.

A line integral along a closed curve is not always zero. It will be zero when the field is conservative, which means that the vector function $\vec f$ can be expressed as $\vec f = \nabla \phi = \frac{\partial \phi}{\partial x}\vec i + \frac{\partial \phi}{\partial y}\vec j$ (working in the $x$-$y$ plane), where $\phi$ is a function of $x, y$. Now $\oint_c \vec f \cdot d\vec r = \oint_c \nabla \phi \cdot d\vec r = \oint_c \left(\frac{\partial \phi}{\partial x}\,dx + \frac{\partial \phi}{\partial y}\,dy\right) = \oint_c d\phi$ - from here on you try to get the further steps and prove that it is indeed zero. (Hint: use the total derivative theorem and assume $\phi$ is a function of $x,y$.)

Surface Integral: This corresponds to 3D surfaces, and the logic flows in the same way as for the line integral; here you want to integrate the values the function takes at points on the surface. For vector fields it is the flux that you would be integrating.

You can think of a line integral as representing the work done by a force, if you've encountered that concept in earlier studies of calculus. In this case, a particle is moving along the curve $C$ (the one you're integrating over) pushed along by a force of magnitude $f(\textbf{x})$. You are basically integrating the dot product of the unit tangent vector of $C$ with a vector $f(\textbf{x})$, which can be taken to measure how much the path $C$ "lines up" with the vector field. Surface integrals are basically a generalization of this concept to three dimensions - here, a surface takes the place of the linear path in a line integral. It represents the flux through the surface. – Ayesha

Thank you all for your answers!
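Both claims above are easy to check numerically. The following is a minimal SymPy sketch: the scalar function $f(x,y) = 1 + x^3 + y^3$ and the two paths are the ones used in the answer above, while the potential $\phi = x^2 + y^2$ is just an assumed example of a conservative field, not something taken from the answers. It evaluates the scalar line integral of $f$ along the segment of the $x$-axis from $(-1,0)$ to $(1,0)$ and along the upper unit semicircle joining the same endpoints, and then the circulation of the gradient field $\nabla\phi$ around the full unit circle.

```python
# Minimal SymPy check: a scalar line integral depends on the path,
# while the circulation of a gradient (conservative) field around a closed loop is zero.
import sympy as sp

t = sp.symbols('t', real=True)
x, y = sp.symbols('x y', real=True)

f = 1 + x**3 + y**3          # scalar function from the example above

def scalar_line_integral(func, xt, yt, t0, t1):
    # integral of func over the curve (x(t), y(t)) with respect to arc length
    ds = sp.sqrt(sp.diff(xt, t)**2 + sp.diff(yt, t)**2)
    integrand = sp.simplify(func.subs({x: xt, y: yt}) * ds)
    return sp.integrate(integrand, (t, t0, t1))

# Path 1: the x-axis from (-1, 0) to (1, 0)
I_segment = scalar_line_integral(f, t, sp.Integer(0), -1, 1)            # = 2

# Path 2: the upper unit semicircle joining the same endpoints
I_semicircle = scalar_line_integral(f, sp.cos(t), sp.sin(t), 0, sp.pi)  # = 4/3 + pi

# Conservative field: F = grad(phi) for an assumed example potential phi
phi = x**2 + y**2
Fx, Fy = sp.diff(phi, x), sp.diff(phi, y)

# Circulation of F around the closed unit circle x = cos t, y = sin t
xt, yt = sp.cos(t), sp.sin(t)
circulation = sp.integrate(
    Fx.subs({x: xt, y: yt}) * sp.diff(xt, t)
    + Fy.subs({x: xt, y: yt}) * sp.diff(yt, t),
    (t, 0, 2*sp.pi),
)                                                                        # = 0

print(I_segment, I_semicircle, circulation)
```

The two scalar integrals come out as $2$ and $\pi + 4/3$, confirming the path dependence, while the circulation of the gradient field is exactly $0$.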
A numerical investigation of high-resolution multispectral absorption tomography for flow thermometry Weiwei Cai1 & Clemens F. Kaminski1 Applied Physics B volume 119, pages29–35(2015)Cite this article Multispectral absorption tomography (MAT) is now a well-established technique that can be applied for the simultaneous imaging of temperature, species concentration, and pressure of reactive flows. However, only intermediate spatial resolution, on order of 15 × 15 grid points, has so far been achievable in previous demonstrations. The aim of the present work is to provide a numerical validation of our MAT algorithm for thermometry of combusting flows, but with greatly improved spatial resolution to motivate its experimental realization in practical environments. We demonstrate a grid resolution that is comparable to that of classical absorption tomography (CAT) containing 80 × 80 elements from only two orthogonal projections, which is impractical to realize with CAT but especially desirable for applications where optical access is limited. This is achieved using the smoothness assumption, which holds true under most combustion conditions. The study shows that better spatial resolution can be obtained through a simple increase in the spatial sampling frequency for the two available projections, as the smoothness condition becomes more reliable on smaller spatial scales. Our work also demonstrates the first application of MAT for full volumetric reconstructions. The studies thus provide robust guidelines for the implementation of MAT over large spatial scales and lay solid foundations for its development and application in complex technical combustion scenarios, where spatial resolution is crucial to investigate the interaction of flow phenomena with chemical reactions. Optical imaging techniques are indispensable for the resolution of non-uniformities in technical flow fields [1, 2]. Generally the techniques can be divided into two categories, which are planar imaging techniques on the one hand and tomography, on the other. As evident from its name, the former category, which includes planar laser-induced fluorescence/phosphorescence [3, 4] and Raman/Rayleigh imaging [5–7], is two-dimensional in nature and requires a pulsed laser source for the selective illumination of the plane of interest. The signal stems from the interaction of the illumination light field with the gas molecules, and the generated light emission is imaged directly onto a two-dimensional detector array, typically a camera. Mathematically speaking, planar imaging techniques can be considered as a straightforward linear field mapping operation. On the other hand, tomography relies on the mapping of integrals of the target field along the line of sight, LOS, which in what follows we refer to as integral mapping [8]. As a consequence, the target field has to be integratable along the LOS and the corresponding integrals have to be physically meaningful. Since emission along the LOS is accumulative and hence integratable along its path toward the detection plane, essentially all planar imaging methods can be upgraded into 3D tomographic modalities. In practice, limitations are only set by the available optical access to the system under study and the excitation power available for volumetric sample illumination. 
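To make the notion of integral mapping concrete, the short sketch below is a simplified illustration only: the smooth Gaussian phantom field, the grid size, and the domain dimensions are arbitrary choices for demonstration and are not data or algorithms from the present work. It shows how the LOS integrals of a discretized two-dimensional field reduce to weighted sums of pixel values for two orthogonal projections.

```python
# Simplified illustration of "integral mapping": line-of-sight (LOS) integrals of a
# discretized 2D field for two orthogonal projections (rows and columns of the grid).
import numpy as np

n = 80                       # grid elements per side (cf. the 80 x 80 grids discussed here)
side = 0.1                   # side length of the region of interest [m]
dx = side / n                # pixel size [m]

coords = (np.arange(n) + 0.5) * dx
X, Y = np.meshgrid(coords, coords, indexing="ij")

# Example target field (e.g. an absorption coefficient distribution), peaked at the centre
field = np.exp(-((X - side/2)**2 + (Y - side/2)**2) / (2 * 0.02**2))

# Each beam integrates the field along its LOS; on a pixelated grid this becomes a sum of
# pixel values along the beam path times the path length per pixel (dx for axis-parallel beams).
proj_beams_along_y = field.sum(axis=1) * dx   # beams parallel to the y-axis, one value per x position
proj_beams_along_x = field.sum(axis=0) * dx   # beams parallel to the x-axis, one value per y position

print(proj_beams_along_y.shape, proj_beams_along_x.shape)   # (80,) (80,)
```

In an actual measurement, each entry of these projection vectors corresponds to one beam, i.e. one LOS measurement, within the respective projection.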
In contrast to planar imaging, which targets only emission fields, tomography can also recover the fields of absorption coefficients, which can be further processed to retrieve other fundamental gas properties, such as temperature, species concentration, and pressure [9–14]. Compared with emission tomography, the absorption counterpart enjoys further advantages such as being calibration free, species selective, and highly sensitive [15–20]. Tomographic absorption techniques can be further classified into classical and nonlinear methods depending on the sampling schemes and mathematical algorithms employed [8]. Classical absorption tomography (CAT) can only handle a single absorption transition in the inversion process, and thus extensive sampling is required to obtain the number of projections (the sets of line-of-sight measurements along specified orientations) necessary for high-resolution reconstruction. Such a sampling scheme usually requires mechanical means for the angular displacement of the projections, which inevitably undermines the temporal resolution, making it unsuitable for rapidly evolving turbulent flows. Since CAT results in a set of linear equations, we also refer to it as linear tomography. Nonlinear methods, on the other hand, are inherently multiplexed: for example, the MAT technique reported previously measures several spectral sampling frequencies simultaneously but requires only two spatial projections, and it is thus inherently faster than CAT [12, 15]. It was demonstrated before that, with two fixed orthogonal projections, a faithful tomographic reconstruction can be obtained for a sample containing 15 × 15 resolution elements at MHz repetition rates [12, 15]. The geometric arrangement of a typical MAT experimental setup with two orthogonal projections is illustrated in Fig. 1. A schematic comparison of the sampling schemes used for CAT and MAT is summarized in Fig. 2. As illustrated, three measurement dimensions are involved: the angular dimension (i.e., the number of projections), the lateral dimension (i.e., the number of sampling beams within a specific projection), and the spectral dimension. The relative lengths of the axes indicate the intensiveness of the sampling operation along a specific dimension. (Fig. 1: Geometric arrangement of a typical MAT experimental setup. Fig. 2: Schematic comparison of the dimensionality of the sampling schemes for a classical absorption tomography and b multispectral absorption tomography; the relative lengths of the axes indicate the intensiveness of the sampling process along a specific dimension.) However, since only two projections were used in previous implementations of MAT, and the spectral sampling can only partially compensate (in a mathematical sense) for spatial sampling deficiencies, the spatial resolution is inevitably undermined. Nevertheless, we point out that there is no limitation per se on the number of projections that can be accommodated by the MAT algorithm, albeit at a commensurate loss in temporal resolution due to beam-scanning requirements. In that case, MAT enjoys a much improved reconstruction fidelity due to better immunity against experimental noise such as that originating from beam steering, window fouling, and etalon fringing [15]. Moreover, since MAT can be combined with advanced detection techniques, such as wavelength modulation spectroscopy (WMS), it can be used for high- and/or varying-pressure scenarios, for which CAT is not optimally suited.
In summary, the nonlinear MAT technique offers full flexibility: it can be deployed either with high temporal but somewhat limited spatial resolution, as demonstrated in an earlier article, or, as we demonstrate here, with very high spatial resolution at the cost of increased data-acquisition requirements. We demonstrate this by increasing the spatial sampling frequency along each projection direction. This finer meshing of the flow field is permissible under the smoothness assumption, which is valid for most combustion environments encountered in practice. We demonstrate reconstructions on grids containing 80 × 80 elements, achieving a spatial resolution comparable to that of CAT while requiring only a fraction of the projections. For high-resolution MAT, the temporal resolution is furthermore only limited by the bandwidth of the data-acquisition system. High-resolution MAT thus has practical potential for deployment in situations where both spatial and temporal resolution are crucial, for example to fully resolve complicated flow fields such as supersonic combustion systems within ramjets/scramjets [21]. So far, both CAT and MAT have been limited to applications in two dimensions, simply because of prohibitive experimental costs for the enabling technology. However, the potential for implementing MAT with inexpensive tunable diode lasers and coarse wavelength-division multiplexing (CWDM) [15] has made volumetric absorption tomography a more practical proposition. This provides us with a strong motivation to further develop this technique theoretically and to perform numerical validation studies in preparation for experimental demonstrations in the near future. The remainder of this paper is organized as follows: Sect. 2 briefly introduces the mathematical formulation of the MAT algorithm; Sects. 3 and 4 present studies of large-scale planar and volumetric implementations of the technique; and the final section provides a summary of our findings. Mathematical formulation of MAT. The mathematical formulation of MAT using both direct absorption spectroscopy (DAS) and WMS has been detailed in previous publications [15, 22]. To facilitate the discussion here, we focus on MAT implementations based on DAS as an example and briefly summarize the formulation used for flow thermometry. According to the Beer–Lambert law, the absorbance $\alpha$ for monochromatic light of frequency $\nu$ passing through a non-homogeneous absorbing medium is defined as

$$\alpha(\nu) = \int_{L_1}^{L_2} \sum_{g} S[T(l), \nu_g] \cdot \phi[T(l), X(l), P, (\nu_g - \nu)] \cdot P \cdot X(l)\,\mathrm{d}l \qquad (1)$$

where $L_1$ and $L_2$ are the intersections between the laser beam and the boundaries of the region of interest (ROI); $T(l)$ and $X(l)$ are the temperature and concentration profiles along the LOS as a function of distance $l$; $P$ is the pressure; $\phi$ is the normalized Voigt line-shape function, which approximates the convolution of the two dominant broadening mechanisms, i.e., Doppler and collisional broadening, representative of typical combustion scenarios; and $S[T(l), \nu_g]$ is the line strength of the $g$th non-negligible transition centered at $\nu_g$ at the prevailing local temperature $T(l)$. In practice, the ROI is represented by discretized pixels, and the integration in Eq. (1) is replaced by a summation operation.
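To make the discretized form of Eq. (1) concrete, the following is a minimal sketch (not code from the paper) of how the absorbance along one beam could be evaluated once the ROI has been pixelated. The Voigt line shape is taken from SciPy; `line_strength` and `widths` are hypothetical lookup functions standing in for a spectroscopic database, and the remaining names are illustrative assumptions.

```python
from scipy.special import voigt_profile  # normalized Voigt shape (Gaussian sigma, Lorentzian gamma)

def absorbance(nu, T_pix, X_pix, dl_pix, P, transitions, line_strength, widths):
    """Discretized Beer-Lambert absorbance along one beam (sketch of Eq. 1).

    nu                     : probe frequency
    T_pix, X_pix           : temperature and mole fraction of the pixels crossed by the beam
    dl_pix                 : path length of the beam inside each pixel
    P                      : (constant) pressure
    transitions            : line-center frequencies nu_g of the non-negligible transitions
    line_strength(T, nu_g) : hypothetical lookup of S[T, nu_g], e.g. from HITRAN-like data
    widths(T, X, P, nu_g)  : hypothetical lookup of the (sigma, gamma) Voigt parameters
    """
    alpha = 0.0
    for T, X, dl in zip(T_pix, X_pix, dl_pix):        # sum over pixels replaces the LOS integral
        for nu_g in transitions:                      # sum over transitions g
            S = line_strength(T, nu_g)
            sigma, gamma = widths(T, X, P, nu_g)
            phi = voigt_profile(nu - nu_g, sigma, gamma)
            alpha += S * phi * P * X * dl
    return alpha
```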
In MAT, the left-hand side of Eq. (1) can be obtained from experimental measurements and the right-hand side can be predicted using Beer's law. By taking LOS measurements at different lateral positions and orientations across the sample for multiple absorption transitions, which is achieved by coarse/dense wavelength-division multiplexing (CWDM/DWDM), the parameters are obtained for a set of nonlinear equations, whose common variables are the profiles of temperature and species concentration. Here we assume that the pressure is constant, which is the case for many practical applications. Currently, the standard way of solving the MAT equation system is through iterative optimization via minimization of a cost function, defined as

$$D = \sum_{j=1}^{J} \sum_{i=1}^{I} \left[ 1 - p_c(\ell_j, \nu_i, T_q, X_q) / p_m(\ell_j, \nu_i) \right]^2 \qquad (2)$$

where $I$ denotes the total number of peak wavelengths used; $J$ the number of sampling laser beams within the ROI; $p_m(\ell_j, \nu_i)$ the LOS measurements at the frequency $\nu_i$ along the $j$th laser beam; and $p_c(\ell_j, \nu_i)$ the corresponding predictions using Beer's law. The cost function, $D$, provides a quantitative measure of the difference between the fitted and measured projections. In the ideal case, where measurements are noise-free and the spectroscopic database is accurate, $D$ reaches its global minimum (zero) when the reconstructions match the true profiles. However, in reality, due to the noisy projections, numerous local minima that have values close to the global minimum would lead to solutions that are significantly different from the true profiles. In this case, additional constraints such as smoothness conditions due to thermal and mass diffusion can be incorporated into the inversion process to rule out the solutions that disagree with the constraints, so that a smooth solution which serves as a good approximation of the true profiles can be reached. For example, the smoothness of temperature within the ROI can be implemented as

$$R_T(\vec{T}^{\mathrm{rec}}) = \sum_{m=1}^{M} \sum_{n=1}^{N} \left[ \sum_{i=m-1}^{m+1} \sum_{j=n-1}^{n+1} \left( T_{i,j}^{\mathrm{rec}} - T_{m,n}^{\mathrm{rec}} \right) / 8 \right] \qquad (3)$$

where $\vec{T}^{\mathrm{rec}}$ stands for the reconstructed temperature distribution; $M$ and $N$ the numbers of square pixels along the $x$ and $y$ directions, respectively; the subscripts $m, n$ run over all inner pixels within the ROI; and the subscripts $i, j$ enumerate the pixels immediately adjacent to every pixel specified by $m$ and $n$. Obviously, $R_T$ decreases as the distribution becomes smoother, and for a constant temperature field $R_T$ would approach zero. The cost function can thus be modified as

$$F = D + \gamma_T \cdot R_T(\vec{T}^{\mathrm{rec}}) \qquad (4)$$

where $\gamma_T$ is a weighting parameter to regulate the relative significance of the a priori (smoothness of the temperature profile) and a posteriori (measured projections) information. Since it was previously demonstrated that the recovery of temperature information in the MAT process is only weakly affected by local concentration variations [23], we omit the latter in the formulation of $F$. However, it has to be pointed out that both the temperature and concentration profiles were set as independent free variables during the fitting process.
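As an illustration of the structure of Eqs. (2)–(4), the sketch below assembles the regularized cost function for a discretized temperature/concentration field. It is not code from the paper: `forward_model` stands in for the Beer's-law prediction of Eq. (1) and is assumed to be provided, and the smoothness term follows Eq. (3) exactly as written.

```python
def smoothness(T_rec):
    """Eq. (3): for each inner pixel, the mean difference to its 3x3 neighbourhood."""
    R = 0.0
    M, N = T_rec.shape
    for m in range(1, M - 1):
        for n in range(1, N - 1):
            window = T_rec[m - 1:m + 2, n - 1:n + 2]
            # The (m, n) term contributes zero, so the window sum minus 9*T[m, n]
            # equals the sum of the eight neighbour differences (as in Eq. 3).
            R += (window.sum() - 9.0 * T_rec[m, n]) / 8.0
    return R

def cost(T_rec, X_rec, p_meas, beams, freqs, gamma_T, forward_model):
    """Eq. (4): F = D + gamma_T * R_T, with D from Eq. (2).

    p_meas[j, i]                      : measured projection of beam j at frequency i
    forward_model(T_rec, X_rec, j, i) : predicted projection via Beer's law (assumed given)
    """
    D = 0.0
    for j, beam in enumerate(beams):
        for i, nu in enumerate(freqs):
            p_c = forward_model(T_rec, X_rec, beam, nu)
            D += (1.0 - p_c / p_meas[j, i]) ** 2
    return D + gamma_T * smoothness(T_rec)
```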
The minimization problem can then be solved using a global minimizer, e.g., the simulated annealing (SA) algorithm [24], which we use here. Large-scale planar MAT. For the study of large-scale planar MAT problems, we artificially generated smooth but multi-modal 2D phantoms of flame temperature and water vapor concentration, as shown in Fig. 3, in order to simulate practical flow conditions. Water vapor was chosen as the target species due to its relatively strong absorption in the near-infrared spectral region compared with other flame species, and also due to its abundance in hydrocarbon/hydrogen flames. For practical applications, the continuous phantoms are meshed by a grid containing N × M pixels. Here we vary N and M to study the effect of variations in the spatial sampling frequency along the projections on the MAT reconstruction quality. Both forward and backward projection processes were considered, i.e., going from phantom to projections and vice versa. In the forward process, the noise-free projections were modeled according to Beer's law, assuming that an accurate spectroscopic database had been used. A specified level of noise was then added to simulate practical projections suffering from noise (a posteriori information). In the backward process, a large number of trial solutions were generated within constraints to match the fitted projections calculated via Eq. (1) to the 'measured' projections while, at the same time, ensuring that the smoothness condition was met (a priori information). The optimal solution was then taken to be the one that struck the best balance between the a priori and a posteriori constraints. To quantify reconstruction quality, an overall average error was defined as

$$e_T = \| \vec{T}^{\mathrm{rec}} - \vec{T}^{\mathrm{true}} \|_1 \, / \, \| \vec{T}^{\mathrm{true}} \|_1 \qquad (5)$$

where $\vec{T}^{\mathrm{true}}$ is the temperature phantom arranged in vector form and $\|\cdot\|_1$ denotes the Manhattan norm of a vector. (Fig. 3: Continuous phantoms generated to simulate temperature and water vapor concentrations in a combusting flow.) Figure 4 shows the reconstruction results for a typical case with N = M = 60. Two orthogonal projections, each containing 60 laser beams, were assumed, so that each projection passed through each grid pixel once. Twenty strong H2O absorption transitions in the 1,350- to 1,500-nm spectral range were selected for this demonstration. Gaussian noise was added at 5 % of the peak value to each projection to simulate realistic experimental conditions. Thus, ~2,000 nonlinear equations were obtained from the simulated measurements. Since the system was modeled with two unknown variables (i.e., temperature and water vapor concentration) for each pixel, an equation system containing ~7,000 variables was obtained. The final reconstructed temperature profile is shown in Fig. 4, left panel. Clearly, the major features of the system are recovered with good resolution, e.g., the twin-peak structure of the temperature profile. The average error obtained is only 1.39 % (~25 K).
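The paper's own optimizer is a purpose-built simulated-annealing code; purely as an illustration, a comparable minimization could be set up with SciPy's generalized simulated annealing, reusing a regularized cost function such as the one sketched above. The bounds, grid shape, and iteration count below are assumptions for illustration, and at the problem sizes reported in the paper a dedicated SA implementation would be needed in practice.

```python
import numpy as np
from scipy.optimize import dual_annealing

def reconstruct(cost_function, shape=(60, 60), maxiter=200):
    """Sketch: minimize the regularized cost F (Eq. 4) over flattened [T, X] fields.

    cost_function(T_rec, X_rec) -> F is assumed to be provided (e.g. the sketch above
    with the measured projections and forward model already bound in).
    """
    n_pix = shape[0] * shape[1]

    def objective(z):
        T_rec = z[:n_pix].reshape(shape)      # temperature field, K
        X_rec = z[n_pix:].reshape(shape)      # H2O mole fraction
        return cost_function(T_rec, X_rec)

    # Illustrative physical bounds, not values taken from the paper.
    bounds = [(300.0, 2500.0)] * n_pix + [(0.0, 0.2)] * n_pix
    result = dual_annealing(objective, bounds, maxiter=maxiter)
    return result.x[:n_pix].reshape(shape), result.x[n_pix:].reshape(shape)

def average_error(T_rec, T_true):
    """Eq. (5): relative Manhattan-norm error of the reconstructed temperature field."""
    return np.abs(T_rec - T_true).sum() / np.abs(T_true).sum()
```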
It has to be noted that the lower reconstruction quality at the edges is due to the relatively weak enforcement of the regularization there, since edge pixels have fewer adjacent pixels available for smoothing. The right panel of Fig. 4 shows the evolution of the terms contributing to the cost function as the SA algorithm progresses, i.e., $D$ and $\gamma_T \cdot R_T(\vec{T}^{\mathrm{rec}})$ normalized by the initial value of $F$. As can be seen, already halfway through the fitting process the cost function (green symbols) has essentially reached its minimum, indicating a close resemblance between the fitted and "measured" projections. The regularization term (red symbols), on the other hand, continues to decrease until the termination criterion of the procedure is satisfied. This means that numerous temperature profiles lead to similar "measured" projections, but only the smooth profiles result in a small regularization term. The procedure thus guarantees a smooth solution as the best approximation to the true phantom. The evolution of both the fitted temperature profile and the contributing terms of the cost function, guided by the SA algorithm, can be found in Media 1. (Fig. 4: An example reconstruction using a discretization of 60 × 60 grid points; see also Media 1 for an animation of the evolution of the reconstructed temperature distribution and the cost function as the simulated annealing algorithm progresses. 5 % Gaussian noise was added to all projections to simulate realistic noise conditions as may be encountered in practical experiments.) Figure 5 shows the reconstruction error of temperature ($e_T$) and the corresponding computational time as a function of meshing scale. The same simulation conditions, i.e., the number of projections, transitions, and noise levels in the "measurements", were used as for the case shown in Fig. 4. To make the comparison meaningful, all simulations were run on the same Intel Core i7-4770 processor. The error bars indicate the standard deviation over 30 cases using projections with the same noise levels. It is seen that the computational time increases almost exponentially with meshing frequency, which is explained by the fact that the number of free variables in the optimization is 2 × N × M. Furthermore, $e_T$ decreases with meshing grid size up to 50 × 50 and increases thereafter. This is due to the fact that the smoothness condition is better satisfied for finer discretization; however, as the meshing scale is increased beyond a critical value, the ratio between the number of variables and nonlinear equations (N/I) also increases, thus reducing reconstruction quality. The overall reconstruction fidelity represents an amalgamation of both factors: for small meshing scales, the smoothness condition outweighs the effect of increasing N/I, and vice versa. (Fig. 5: Reconstruction error of the temperature profile and corresponding computational time as a function of meshing scale; 5 % Gaussian noise was added to all projections.) Volumetric MAT. So far, MAT, and indeed CAT, has been limited to 2D situations simply due to prohibitive experimental costs. Fortunately, recent advances in MAT with inexpensive tunable diode lasers have made the experimental realization of volumetric absorption tomography a more realistic proposition. This encourages us to perform the preliminary numerical studies presented here in preparation for later experimental demonstrations.
We note that a volumetric MAT implementation is a worthwhile endeavor not only from an application point of view (i.e., providing full 3D information about the system under study); it also brings advantages for the tomographic inversion process, because the extra information gained along directions between adjacent planes offers further constraints (e.g., smoothness between the layers) that make the method even more robust. Here we test the feasibility of such a volumetric tomography approach with MAT. We generated a 3D phantom, shown in Fig. 6, in such a way that the smoothness condition could be applied not only within but also between the layers. The ROI was discretized into 15 × 15 × 15 voxels, giving a total of ~7,000 variables (3,375 for temperature and 3,375 for species concentration). Again, the same simulation conditions were used as in the planar MAT cases. Two orthogonal projections were assumed, with 450 beams in total, resulting in exactly 9,000 nonlinear equations. Accordingly, Eq. (2) was modified to accommodate measurements along the third dimension as

$$D = \sum_{k=1}^{K} \sum_{j=1}^{J} \sum_{i=1}^{I} \left[ 1 - p_c(\ell_{j,k}, \nu_i, T_q, X_q) / p_m(\ell_{j,k}, \nu_i) \right]^2 \qquad (6)$$

where $K$ indicates the total number of layers. (Fig. 6: Volumetric temperature phantom using a discretization of 15 × 15 × 15.) In addition, to take full advantage of the connections between layers, Eq. (3) was rewritten as

$$R_T(\vec{T}^{\mathrm{rec}}) = \sum_{k=1}^{K} \sum_{m=1}^{M} \sum_{n=1}^{N} \left[ \sum_{g=k-1}^{k+1} \sum_{i=m-1}^{m+1} \sum_{j=n-1}^{n+1} \left( T_{g,i,j}^{\mathrm{rec}} - T_{k,m,n}^{\mathrm{rec}} \right) / 26 \right] \qquad (7)$$

Figure 7 shows an example reconstruction of the volumetric temperature field for the phantom; the average error $e_T$ is stated for each recovered layer in the figure panels. Remarkably, good reconstruction fidelity was obtained, with an overall error $e_T$ of <2.6 % (~40 K) throughout. (Fig. 7: Reconstructed volumetric temperature distributions; average temperature errors $e_T$ are indicated for each reconstructed layer.) In summary, we present numerical studies of nonlinear MAT for large computational mesh sizes. We demonstrate a spatial resolution with meshes containing up to 80 × 80 grid points for planar MAT requiring just two orthogonal projections. The spatial resolution obtained is comparable with that of CAT, which needs many more projections. Even better spatial resolution is in principle possible at the expense of increased computational cost. We show that reconstruction fidelity is improved simply by increasing the spatial sampling frequency along the available orthogonal projections, up to the point where the effect of increasing N/I (the ratio between the number of variables and nonlinear equations) outweighs the benefit of the smoothness condition. However, it has to be pointed out that for more complicated turbulent flow fields featuring sharper gradients and more intense fluctuations, more projections are necessary to resolve all information on the relevant spatial scales. In theory, resolution can be improved by using more projections, but in practice the achievable resolution is dictated by the maximum number of projections available with limited optical access and the signal-to-noise ratios that can be attained.
An additional advantage of the MAT algorithm is that, for the same number of projections, it features better noise immunity than CAT against experimental noise originating from beam steering, window fouling, and etalon fringing [15]. Finally, we demonstrate that full volumetric MAT is a feasible and realistic proposition for experimental realization. The availability of cost-efficient laser and detector technologies means that full 3D reconstructions of dynamic combusting flows will soon become a reality.
References
M.A. Linne, Spectroscopic measurement: an introduction to the fundamentals (Academic Press, London, 2002)
K. Kohse-Höinghaus, J.B. Jeffries, Applied combustion diagnostics (Taylor & Francis, New York, 2002)
B. Peterson, E. Baum, B. Böhm, V. Sick, A. Dreizler, Proc. Combust. Inst. 34, 3653 (2013)
M. Mosburger, V. Sick, M.C. Drake, Int. J. Engine Res., 1468087413476291 (2013)
U. Doll, G. Stockhausen, C. Willert, Exp. Fluids 55, 1 (2014)
D. Hoffman, K.-U. Münch, A. Leipertz, Opt. Lett. 21, 525 (1996)
G. Kathryn, F. Frederik, S. Jeffrey, in 51st AIAA Aerospace Sciences Meeting including the New Horizons Forum and Aerospace Exposition (American Institute of Aeronautics and Astronautics, 2013)
W. Cai, C.F. Kaminski, in Nonlinear tomography: a new imaging concept, 2014 (Optical Society of America), p. LM1D.5
J. Li, Z. Du, T. Zhou, K. Zhou, in Numerical investigation of two-dimensional imaging for temperature and species concentration using tunable diode laser absorption spectroscopy, 2012 (IEEE), p. 234
M.G. Twynstra, K.J. Daun, Appl. Opt. 51, 7059 (2012)
J. Song, Y. Hong, G. Wang, H. Pan, Appl. Phys. B 112, 529 (2013)
W. Cai, C.F. Kaminski, Appl. Phys. Lett. 104, 034101 (2014)
A. Guha, I.M. Schoegl, J. Propul. Power 30, 350 (2014)
M. Wood, K. Ozanyan, IEEE Sens. J. 15, 545 (2015)
J. Hult, R.S. Watt, C.F. Kaminski, Opt. Express 15, 11385 (2007)
C. Kaminski, R. Watt, A. Elder, J. Frank, J. Hult, Appl. Phys. B 92, 367 (2008)
J. Langridge, T. Laurila, R. Watt, R. Jones, C. Kaminski, J. Hult, Opt. Express 16, 10178 (2008)
T. Laurila, I. Burns, J. Hult, J. Miller, C. Kaminski, Appl. Phys. B 102, 271 (2011)
R.S. Watt, T. Laurila, C.F. Kaminski, J. Hult, Appl. Spectrosc. 63, 1389 (2009)
J. Hu, W. Bao, J. Chang, J. Propul. Power 30, 1103 (2014)
W. Cai, D.J. Ewing, L. Ma, Comput. Phys. Commun. 179, 250 (2008)
L. Ma, W. Cai, Appl. Opt. 47, 4186 (2008)
W. Cai, L. Ma, Comput. Phys. Commun. 181, 11 (2010)
This work was funded by the European Commission under Grant No. ASHTCSC 330840 and was partly performed using the Darwin Supercomputer of the University of Cambridge High Performance Computing Service. Clemens F. Kaminski also wishes to acknowledge EPSRC for funding (Grant EP/L015889/1). Department of Chemical Engineering and Biotechnology, University of Cambridge, Cambridge, CB2 3RA, UK: Weiwei Cai & Clemens F. Kaminski. Correspondence to Clemens F. Kaminski. Supplementary material 1 (MP4 2998 kb). Open Access: This article is distributed under the terms of the Creative Commons Attribution License, which permits any use, distribution, and reproduction in any medium, provided the original author(s) and the source are credited. Cai, W., Kaminski, C.F. A numerical investigation of high-resolution multispectral absorption tomography for flow thermometry. Appl. Phys. B 119, 29–35 (2015). doi:10.1007/s00340-015-6012-5. Keywords: Reconstruction quality; Smoothness condition; True profile; Spectral sampling; Wavelength modulation spectroscopy.
Enhancement of ascomycin production via a combination of atmospheric and room temperature plasma mutagenesis in Streptomyces hygroscopicus and medium optimization Zhituo Yu1,2, Xiaofang Shen2, Yuanjie Wu2, Songbai Yang2, Dianwen Ju1 & Shaoxin Chen2 Ascomycin, a key intermediate for chemical synthesis of immunosuppressive drug pimecrolimus, is produced by Streptomyces hygroscopicus var. ascomyceticus. In order to improve the strain production, the original S. hygroscopicus ATCC 14891 strain was treated here with atmospheric and room temperature plasma to obtain a stable high-producing S. hygroscopicus SFK-36 strain which produced 495.3 mg/L ascomycin, a 32.5% increase in ascomycin compared to the ATCC 14891. Then, fermentation medium was optimized using response surface methodology to further enhance ascomycin production. In the optimized medium containing 81.0 g/L soluble starch, 57.4 g/L peanut meal, and 15.8 g/L soybean oil, the ascomycin yield reached 1466.3 mg/L in flask culture. Furthermore, the fermentation process was carried out in a 5 L fermenter, and the ascomycin yield reached 1476.9 mg/L, which is the highest ascomycin yield reported so far. Therefore, traditional mutagenesis breeding combined with medium optimization is an effective approach for the enhancement of ascomycin production. Ascomycin, a 23-membered macrocyclic polyketide (Additional file 1: Figure S1), was produced by Streptomyces hygroscopicus var. ascomyceticus ATCC 14891 strain (Bérdy 2005). Ascomycin is pharmacologically important, because it has the same macrolide structure as tacrolimus (Qi et al. 2014). Therefore, ascomycin has been highly valued in various clinical applications (Bugelski et al. 2010; Ianiri et al. 2017; Taieb et al. 2013). In particular, a semisynthetic derivative of ascomycin called pimecrolimus has been used as the first-line treatment for mild-to-moderate atopic dermatitis and plays an important role in the market of immunosuppressive drugs (Bardur et al. 2015; Ho et al. 2003). Due to its complex macrolide structure, ascomycin is difficult to synthesize by chemical methods, and thus is mainly produced by microbiological fermentation (Xin et al. 2015). However, the yield of ascomycin produced via microbiological fermentation is still low and the production costs are high. Recently, various efforts have been made to improve ascomycin yield through genetic manipulation. For example, the overexpression of some key genes involved in ascomycin biosynthesis, such as hcd, ccr, fkbR1, and fkbE, led to a marked increase in ascomycin yield (Song et al. 2017; Wang et al. 2017). In addition, an engineered S. hygroscopicus strain with increased chorismatase (FkbO) activity and inactivated pyruvate carboxylase (Pyc), named TD-ΔPyc-FkbO, showed the highest reported ascomycin yield to date, 610.0 mg/L (Qi et al. 2017). Nonetheless, this yield is considered low, as it is not high enough to meet the demands, and the lack of complete genomic information for S. hygroscopicus limits further modifications of the strain by genetic manipulation (Wu et al. 2000). Traditional mutagenesis, and especially some new mutagenesis methods, is an effective and rapid way to enhance the production of secondary metabolites (Sivaramakrishnan and Incharoensakdi 2017; Zhang et al. 2016a). 
Recently, atmospheric and room temperature plasma (ARTP), which induces mutations at a higher rate than ultraviolet irradiation and N-methyl-N′-nitro-N-nitrosoguanidine mutation, proved to be an efficient tool to generate stable high-yield mutant strains for microorganism breeding (Ottenheim et al. 2018; Ren et al. 2017). ARTP mutagenesis has been combined with diethyl sulfate treatments for the improvement of arachidonic acid production (Li et al. 2015). Additionally, the production of transglutaminase was significantly enhanced in Streptomyces mobaraensis by iterative mutagenesis breeding with ARTP (Jiang et al. 2017). It is necessary to optimize the composition of the culture medium to further enhance ascomycin production. However, in traditional 'one-factor-at-a-time' experiments, the effects of various factors can only be investigated one at a time, and this approach fails to evaluate multifactorial interactions among all components, thereby leading to inefficient and time-consuming work (Lee 2018). Response surface methodology (RSM) includes factorial designs and regression analysis for construction of empirical models, making it an excellent statistical tool for increasing the production of valuable metabolites (Chaudhary et al. 2017; Fu et al. 2016). RSM can evaluate all the factors simultaneously and determine the optimal culture conditions for microbes. Compared with traditional optimization, it can save a lot of time when employing RSM to obtain the desirable fermentation medium for ascomycin production. Herein, we obtained a high-yield S. hygroscopicus SFK-36 strain by ARTP-induced mutagenesis and then fermentation medium was optimized by RSM. Finally, scale-up fermentation showed significantly improved ascomycin production by SFK-36. Strains and primers Streptomyces hygroscopicus ATCC 14891 strain was purchased from ATCC. S. hygroscopicus SFK-36 is a mutant of ATCC 14891 strain. Primers used in this study are listed in Additional file 1: Table S1. Culture conditions for S. hygroscopicus The S. hygroscopicus strain was inoculated onto a slant medium (soluble starch, 10.0 g/L; yeast extract, 4.0 g/L; K2HPO4, 0.5 g/L; MgSO4·7H2O, 0.5 g/L; and agar, 20.0 g/L; pH 7.2) at 28 °C for 7 days to harvest spores. The spores were inoculated into 20 mL of the seed medium (corn steep liquor, 8.0 g/L; glucose, 10.0 g/L; cottonseed meal, 3.0 g/L; and KH2PO4, 1.0 g/L; pH 7.0) in a 250 mL flask incubated at 28 °C and 200 rpm. Then, the 10% (v/v) seed culture was transferred into 250 mL flasks containing 25 mL of the fermentation medium and incubated at 28 °C and 200 rpm. The original fermentation medium (soluble starch, 20.0 g/L; dextrin, 40.0 g/L; yeast powder, 5.0 g/L; peptone, 5.0 g/L; corn steep liquor, 5.0 g/L; K2HPO4·3H2O, 1.0 g/L; (NH4)2SO4, 1.5 g/L; MnSO4·H2O, 0.5 g/L; MgSO4·7H2O, 1.0 g/L; CaCO3, 1.0 g/L; and soybean oil, 1.0 g/L; pH 6.5) was previously described (Song et al. 2017). Dextrin (dextrose equivalent value 10–15) used in this study was purchased from Shandong Xiwang Co., Ltd., China. Each experiment was repeated three times, and the error bar was used to indicate the standard deviations (SDs). ARTP mutagenesis The original ATCC 14891 strain was treated with ARTP Mutagenesis Breeding Machine (ARTP-M, Wuxi TMAXTREE Biotechnology Co., Ltd., China) following a method reported previously (Ren et al. 2017), with some modifications. As shown in Fig. 1a, the spore suspension was first harvested from a fresh slant medium. 
Next, 10 μL of the spore solution (10^7 spores/mL) was placed onto a sterile stainless-steel plate and subjected to plasma irradiation. The working parameters were as follows: radiofrequency power input, 100 W; gas flow of pure helium, 10 L/min; effective distance between the plasma torch nozzle exit and the sample plate, 2 mm; temperature of the plasma jet, room temperature (20–25 °C); and treatment periods for the spores, 0, 30, 60, 90, 120, and 150 s. Afterwards, mutant strains were randomly selected and inoculated into the fermentation medium in shaking flasks. (Fig. 1: ARTP mutagenesis breeding for the S. hygroscopicus ATCC 14891 strain. a The experimental procedure of high-yield strain screening by the ARTP breeding system. b Lethality rate and positive mutation rate of the ATCC 14891 strain after ARTP mutagenesis. c Ascomycin production of the isolated mutants after ARTP mutagenesis; the black solid line represents 373.8 mg/L, the production of the original ATCC 14891 strain, while the red dotted line marks the yield of positive mutant strains, in which ascomycin production was over 10% higher relative to the ATCC 14891 strain. d Comparison of ascomycin production between SFK-36 and ATCC 14891 in shake flasks. e Genetic stability of SFK-36 in ascomycin production during natural subculture.) The lethality rate was calculated as follows:

$$\text{Lethality rate}\ (\%) = \left[ (C - S)/C \right] \times 100\%$$

where C is the total number of colony forming units (CFUs) of the spores without ARTP treatment, and S is the CFU number of the mutant strains after ARTP mutagenesis breeding. In addition, the positive mutation rate was determined with the following equation:

$$\text{Positive mutation rate}\ (\%) = (P/S) \times 100\%$$

where P is the CFU number of mutants with a greater than 10% increase in ascomycin production relative to the ATCC 14891 strain, and S is the CFU number of the mutant strains after ARTP mutagenesis breeding. Verification of genetic stability by RAPD analysis. Streptomyces hygroscopicus strains were grown in tryptic soy broth medium (TSB; Becton–Dickinson, Sparks, MD, USA) for 2 days at 28 °C and 200 rpm. The culture broth was then centrifuged at 8000×g for 5 min to collect mycelium for DNA isolation. The total genomic DNA of the S. hygroscopicus strains was extracted using the DNeasy Blood & Tissue Kit (Qiagen 69504, Germany) according to the manufacturer's instructions. Randomly amplified polymorphic DNA (RAPD) analysis was carried out with primers RAPD-1, RAPD-2 and RAPD-3 (Additional file 1: Table S1) to map the DNA fingerprint of the SFK-36 mutant strain according to a previously reported method (Martins et al. 2004; Sikora et al. 1997). Optimization by RSM and statistical design. The experimental design for RSM was generated in the statistical software Design Expert Version 8.0.6. A three-level Box–Behnken design included soluble starch (A), peanut meal (B) and soybean oil (C). All variables were set to three levels (− 1, 0, and 1) corresponding to the minimum, intermediate, and maximum values determined from the preliminary experimental results (Table 1). A total of 17 factorial points were designed and used in the standard order (Table 2). Among these, 12 points represented different combinations of the experimental variables and five additional experimental setups were replications of the central point. Second-order polynomial Eq.
(1) was employed to analyze the effects of independent variables on the response as follows (Ju et al. 2015): $$Y = \beta_{0} + \beta_{a} A + \beta_{b} B + \beta_{c} C + \beta_{ab} AB + \beta_{ac} AC + \beta_{bc} BC + \beta_{aa} A^{2} + \beta_{bb} B^{2} + \beta_{cc} C^{2}$$ where, Y is the response calculated; A, B and C are the coded values of independent variables; β0, βa, βb, βc, βab, βac, βbc, βaa, βbb and βcc are the constant regression coefficients of the model. Table 1 Independent variables and different levels used in RSM Table 2 Experimental design for media optimization Culture of S. hygroscopicus in a 5 L fermenter The S. hygroscopicus SFK-36 strain was cultured in 750 mL flasks containing 50 mL of the seed medium at 28 °C and 200 rpm. Next, the seed culture was inoculated (10%, v/v) into the 5 L fermenter containing 2.5 L of the fermentation medium. Fermentation was carried out at 28 °C for 192 h with aeration at 1.0 vvm (air volume/culture volume/min). The agitation rate was set automatically to maintain dissolved oxygen levels higher than 20% during the whole fermentation process, and pH was maintained at 6.5 using 1 mol/L ammonia and 2 mol/L H2SO4 starting from 72 h of fermentation. Analysis of transcriptional levels by quantitative RT-PCR Fermentation culture broth (10 mL) of the S. hygroscopicus ATCC 14891 and SFK-36 were centrifuged at 8000×g for 5 min to collect mycelium cultured for 3 days and 6 days, respectively. Then, the total RNA was obtained using the Ultrapure RNA Kit (CWBIO, Beijing, China) according to the manufacturer's instructions. The residual genomic DNA in the RNA sample was removed by RNase-free DNase I (Takara, Dalian, China). To verify elimination of the residual DNA, the primers 16S-RT-F/16S-RT-R (Additional file 1: Table S1) were used to amplify the total RNA. Reverse transcription-PCR (RT-PCR) analysis was performed with the PrimeScript™ RT Reagent Kit (Takara, Dalian, China) according to the instructions provided by the manufacturer. To determine the transcription units in ascomycin biosynthetic gene cluster, 10 pairs of primers (Additional file 1: Table S1) were used to amplify the cDNA of S. hygroscopicus ATCC 14891 strain, including fkbW-U-F/fkbW-U-R, fkbU-R2-F/fkbU-R2-R, fkbR2-R1-F/fkbR2-R1-R, fkbF-G-F/fkbF-G-R, fkbH-I-F/fkbH-I-R, fkbK-L-F/fkbK-L-R, fkbL-C-F/fkbL-C-R, fkbN-Q-F/fkbN-Q-R, fkbP-A-F/fkbP-A-R, fkbC-B-F/fkbC-B-R. The corresponding PCR products were individually identified by sequencing. To analyze the transcriptional levels in S. hygroscopicus ATCC 14891 and SFK-36, quantitative real-time polymerase chain reaction (qRT-PCR) was conducted with TB Green™ Premix Ex Taq™ II (Takara, Dalian, China) according to the manufacturer's instructions. Seven primer pairs were used to determine the transcriptional levels of ascomycin biosynthetic gene cluster, including fkbW-RT-F/fkbW-RT-R, fkbU-RT-F/fkbU-RT-R, fkbR1-RT-F/fkbR1-RT-R, fkbE-RT-F/fkbE-RT-R, fkbB-RT-F/fkbB-RT-R, fkbO-RT-F/fkbO-RT-R, fkbS-RT-F/fkbS-RT-R, 16S-RT-F/16S-RT-R (Additional file 1: Table S1). All the amplicons were confirmed by sequencing. The transcriptional levels were normalized using gene 16S rRNA as the internal control (Wang et al. 2018). The fold changes of test genes were quantified using 2−ΔΔCt method (Livak and Schmittgen 2001). Each qRT-PCR experiment was performed for three times, and the error bar was used to show the standard deviations (SDs). 
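For readers unfamiliar with the 2^−ΔΔCt method cited above (Livak and Schmittgen 2001), the following is a minimal sketch of the fold-change calculation. It is not the authors' analysis script, and the Ct values in the example are hypothetical, not data from the study.

```python
def fold_change(ct_target_test, ct_ref_test, ct_target_ctrl, ct_ref_ctrl):
    """Relative expression by the 2^-ddCt method.

    ct_target_* : Ct of the gene of interest (e.g. fkbB) in the test (SFK-36) / control (ATCC 14891) sample
    ct_ref_*    : Ct of the internal reference (16S rRNA) in the same samples
    """
    d_ct_test = ct_target_test - ct_ref_test   # normalize to 16S rRNA in the test sample
    d_ct_ctrl = ct_target_ctrl - ct_ref_ctrl   # normalize in the control sample
    dd_ct = d_ct_test - d_ct_ctrl
    return 2.0 ** (-dd_ct)

# Example with made-up Ct values:
print(fold_change(22.1, 14.0, 24.3, 14.1))     # ~4.3-fold upregulation in the test strain
```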
To extract ascomycin from the fermentation broth sample, the cultures were diluted with four volumes of acetone and subjected to ultrasonication (50 Hz) for 20 min and then centrifuged at 13,780×g for 3 min. After centrifugation, the supernatant was filtered through a 0.22 μm filter and then analyzed by high-performance liquid chromatography (HPLC 1260 instrument, Agilent, USA) on a Hypersil BDS C18 column (5 μm, 4.6 × 150 mm) with monitoring at 210 nm. The mobile phase was deionized water with acetonitrile (35:65, v/v) at a flow rate of 1.0 mL/min at 55 °C. Ascomycin standard (Bioaustralis Fine Chemicals) was used as a control to make standard curves for quantitative analysis. The biomass of each fermentation sample was determined as packed mycelium volume (PMV). Briefly, culture broth (1.5 mL) was centrifuged at 13,780×g for 5 min to measure PMV. The total residual sugar in the culture broth was determined by a concentrated sulfuric-acid method as previously reported (Du et al. 2017). ARTP mutagenesis and strain screening To determine the optimal treatment time for ARTP mutagenesis, the lethality and positive mutation rates of different treatment time (0, 30, 60, 90, 120, and 150 s) were determined. When ATCC 14891 strain was treated with the plasma for 120 s, the lethality rate of ARTP mutagenesis reached 99.5% and the highest positive mutation rate of 11.6% was obtained (Fig. 1b). Therefore, the optimal operating time of ARTP breeding for ATCC 14891 strain was set to 120 s. Finally, 100 mutant strains were selected for the assessment of ascomycin production in flask culture. As presented in Fig. 1c, a total of 12 positive mutants were obtained. Of these, SFK-36 showed the highest ascomycin yield (495.3 mg/L), a 32.5% increase as compared with the original strain (373.8 mg/L). The time course of ascomycin production clearly revealed that SFK-36 produced more ascomycin than ATCC 14891 strain (Fig. 1d). The significant increase in ascomycin yield confirmed that ARTP mutation breeding is an efficient method for generating high-producing mutants. The genetic stability of the SFK-36 mutant strain was investigated by subculturing for 10 generations. In each generation, the strain of each generation was transferred into fresh seed medium and then inoculated into fermentation medium in shake flasks for 7 days. The results showed that ascomycin yield of these generations ranged from 478.2 to 509.1 mg/L (Fig. 1e), indicating no significant difference. In particular, a highly similar DNA fingerprint was observed after 10 generations (Additional file 1: Figure S2), thus, indicating that SFK-36 is a genetically stable mutant strain that can be used for ascomycin high-yield production. Transcriptional analysis of ascomycin biosynthetic gene cluster To understand the possible causes leading to production improvement, the transcriptional analysis of ascomycin biosynthetic gene cluster was investigated. Firstly, RT-PCR was applied to determine co-transcription units in the gene cluster by amplifying the cDNA of S. hygroscopicus ATCC 14891 strain with primers (Additional file 1: Table S1). The genomic DNA of S. hygroscopicus ATCC 14891 strain was used as the control. As shown in Fig. 2a, b, there are totally seven co-transcription units in the ascomycin biosynthetic gene cluster, including fkbW, fkbU, fkbR1/R2, fkbE/F/G, fkbB/C/L/K/J/I/H, fkbO/P/A/D/M, and fkbS/Q/N. Transcriptional analysis of ascomycin biosynthetic gene cluster in S. hygroscopicus. 
a Genetic organization of the co-transcription units in the ascomycin biosynthetic gene cluster. b Co-transcriptional analysis of the ascomycin biosynthetic gene cluster by RT-PCR; genomic DNA (gDNA) and cDNA of the S. hygroscopicus ATCC 14891 strain were used for PCR amplification. c Relative expression levels of the ascomycin biosynthetic gene cluster in S. hygroscopicus SFK-36 compared with those in the ATCC 14891 strain; in total, seven genes were selected to indicate the expression levels of the co-transcription units (***P < 0.001).) To compare the relative expression levels of the ascomycin biosynthetic gene cluster in S. hygroscopicus SFK-36 and the ATCC 14891 strain, qRT-PCR was then performed on samples collected at 3 days and 6 days, respectively. One gene in each of the seven co-transcription units (fkbW, fkbU, fkbR1, fkbE, fkbB, fkbO, and fkbS) was selected to determine the corresponding expression levels. The results showed that the expression levels of four co-transcription units, fkbW, fkbU, fkbB/C/L/K/J/I/H and fkbO/P/A/D/M, were at least twofold higher in S. hygroscopicus SFK-36 than in the ATCC 14891 strain (Fig. 2c). The upregulation of the gene products encoded in these co-transcription units might contribute to the improvement of ascomycin production in the SFK-36 mutant strain. The influence of carbon sources on ascomycin production. To further improve ascomycin production, the effects of five carbon sources on ascomycin production were evaluated individually by replacing the carbon source (20 g/L starch and 40 g/L dextrin) of the original medium (Song et al. 2017). The selected carbon sources were glucose, glycerol, soluble starch, sucrose, and dextrin (all at a concentration of 60 g/L). The original medium served as a control. As depicted in Fig. 3a, the biomass of SFK-36 after culture in media with the different carbon sources ranged from 18.6 to 22.5% (PMV), indicating no significant difference in mycelium growth. Nevertheless, in the media containing starch, dextrin or glycerol, ascomycin production was higher relative to the control, and the highest ascomycin yield of 662.7 mg/L was obtained in the soluble starch medium, 33.8% higher than that in the original medium (495.3 mg/L). In contrast, glucose and sucrose exerted a negative effect on ascomycin production and reduced the yield to 312.9 and 188.8 mg/L, respectively. Next, the effects of different soluble starch concentrations on ascomycin production were studied (Fig. 3b). The results revealed that the yield and biomass increased with the soluble starch concentration from 20 to 80 g/L, and the highest yield was 814.4 mg/L. (Fig. 3: Effect of different fermentation media on ascomycin production by S. hygroscopicus SFK-36. Effect of different carbon sources (a), soluble starch concentrations (b), nitrogen sources (c), peanut meal concentrations (d), oils (e), and soybean oil concentrations (f) on ascomycin production and biomass of SFK-36.) The influence of nitrogen sources on ascomycin production. To determine the optimal nitrogen source for ascomycin production, six commonly used nitrogen sources (soybean powder, peanut meal, gluten meal, yeast powder, corn steep liquor and cottonseed meal) were tested at 60 g/L. As shown in Fig. 3c, biomass in the culture media with the different nitrogen sources ranged from 27.5 to 30.3%, indicating that all the tested nitrogen sources can effectively support cell growth. The highest ascomycin yield (1093.3 mg/L) was obtained in the fermentation medium containing peanut meal.
This yield is 120.7% higher than that in the original medium. In contrast, the addition of yeast powder, gluten meal, or corn steep liquor inhibited ascomycin production, and the yields in media containing these nitrogen sources were less than 200 mg/L. Then, ascomycin production was assessed at different concentrations of peanut meal. As presented in Fig. 3d, both ascomycin production and biomass increased when the peanut meal concentration was increased from 20 to 60 g/L. Therefore, the highest yield of 814.4 mg/L was observed in medium containing 60 g/L peanut meal. The influence of oil on ascomycin production. Oil is rich in fatty acids, which can be degraded into acyl-coenzymes A by microbes. Acyl-coenzymes A are recognized as important precursors for the biosynthesis of secondary metabolites in Streptomyces (Chen et al. 2012; Duan et al. 2012; Lu et al. 2016). As shown in Fig. 3e, the five tested oils exerted different effects on ascomycin production. The SFK-36 strain in the soybean oil medium showed the highest ascomycin yield (1097.9 mg/L) and biomass (29.5%). In contrast, both methyl oleate and butyl oleate decreased the ascomycin yield and biomass. To further improve ascomycin production, different concentrations of soybean oil were added to the fermentation medium (Fig. 3f). The corresponding biomass ranged from 28.7 to 30.7%. The ascomycin yield increased with the addition of soybean oil but decreased when the concentration of soybean oil was higher than 15 g/L; the highest ascomycin yield was 1356.7 mg/L. Parameter optimization by RSM for ascomycin production. Three factors, soluble starch (A), peanut meal (B) and soybean oil (C), were considered at three different levels. A total of 17 experimental points were obtained, and the corresponding results are listed in Table 2. Finally, multiple regression analysis generated the following equation for modelling ascomycin production (R² = 0.99 and P < 0.001):

$$Y = 1423.30 + 20.09A - 143.80B + 83.84C - 68.95AB - 60.58AC + 2.80BC - 141.46A^{2} - 287.89B^{2} - 233.66C^{2}$$

The results of the analysis of variance for this model are given in Table 3 and were used to assess the significance of the model. The F-value was determined to evaluate the mean square regression and residual of the predictive model. The model F-value of 74.75 indicated that the model was statistically significant. The F-value of the lack-of-fit test was 4.65, which implied that the lack-of-fit was not significantly associated with pure error and that the model fitted the experimental data well. Additionally, the P-value was calculated to determine the significance of the terms and of the interactions between the experimental variables. The analysis revealed that B, C, AB, AC, A², B², and C² were statistically significant model terms (P < 0.05), indicating that ascomycin production can be analyzed and estimated by means of the model. (Table 3: ANOVA for the response surface quadratic model.) The three-dimensional (3D) response surface plots and two-dimensional (2D) contour plots representing the predictive equation are shown in Fig. 4. The 3D response surface plots were expected to form a convex shape, with the peak of each plot representing the optimum combination of the two tested factors. The 2D contour plots had an oval shape when one factor was fixed, indicating a significant interaction between the two variables.
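As an illustration of how an optimum can be extracted from such a fitted quadratic, the sketch below maximizes the published regression equation over the coded design region [−1, 1] for each factor. The optimizer choice and bounds are our assumptions, not a procedure stated by the authors, and converting the coded optimum back to g/L would require the factor levels of Table 1, which are not reproduced in the text excerpt.

```python
import numpy as np
from scipy.optimize import minimize

def yield_model(x):
    """Fitted second-order model in coded variables A, B, C (coefficients from the text)."""
    A, B, C = x
    return (1423.30 + 20.09 * A - 143.80 * B + 83.84 * C
            - 68.95 * A * B - 60.58 * A * C + 2.80 * B * C
            - 141.46 * A ** 2 - 287.89 * B ** 2 - 233.66 * C ** 2)

# Maximize Y over the coded design region [-1, 1]^3 by minimizing its negative.
res = minimize(lambda x: -yield_model(x), x0=np.zeros(3),
               bounds=[(-1.0, 1.0)] * 3, method="L-BFGS-B")

A_opt, B_opt, C_opt = res.x
print(f"coded optimum: A={A_opt:.3f}, B={B_opt:.3f}, C={C_opt:.3f}")
print(f"predicted yield at optimum: {yield_model(res.x):.1f} mg/L")
```

Evaluating the fitted model at its interior stationary point reproduces a predicted maximum of roughly 1450 mg/L, consistent with the predicted value reported in the validation section below.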
Therefore, the optimal combination predicted by the 3D response surface plots and 2D contour plots was 81.0 g/L soluble starch, 57.4 g/L peanut meal, and 15.8 g/L soybean oil. The effect of mutual interactions among soluble starch,peanut meal and soybean oil on ascomycin production by S. hygroscopicus SKF-36. Response surfaces and contour plots of effects of interactions between soluble starch and peanut meal (a), soluble starch and soybean oil (b), peanut meal and soybean oil (c) on ascomycin production by SKF-36 Validation of the predictive model The optimum combination of the experimental variables (A, B, and C) was predicted via response surface and contour plots. The optimal fermentation medium was found to contain the following components (g/L): soluble starch, 81.0; peanut meal, 57.4; soybean oil, 15.8; MnSO4·H2O, 0.5; K2HPO4·3H2O, 1.0; MgSO4·7H2O, 1.0; and CaCO3, 1.0. SFK-36 was cultured in the optimized medium and original medium separately to evaluate ascomycin production (Fig. 5). In the optimized medium, pH was in the range of 6.0–7.0, which turned out to be lower than that in the original medium during the whole fermentation process (Fig. 5a). Additionally, SFK-36 showed a faster carbon utilization rate in the optimized medium, as the final residual sugar concentrations at the end of fermentation in the optimized and original mediums were 5.2 and 15.1 g/L, respectively (Fig. 5b). The biomass of SFK-36 was 38.5% and 25.8% in the optimized medium and original medium, respectively, at 144 h (Fig. 5c), suggesting that the optimized medium was more suitable for the growth of SFK-36 mycelia. Ascomycin yield reached 1446.3 mg/L at 168 h in the optimized medium (Fig. 5d); this figure was close to the predicted value of 1450.0 mg/L (99.7%). Comparison of fermentation process between optimized medium and original medium by S. hygroscopicus SFK-36 in flask culture. Changes of pH (a), consumption of carbon source (b), time course of growth (c), and ascomycin accumulation (d) of SFK-36 in optimized medium and original medium Effects of fermentation conditions on ascomycin production The influence of pH on production is presented in Fig. 6a. Mycelium growth showed no significant difference when pH was varied from 6.0 to 6.5. Ascomycin yield reached 1421.4 mg/L at pH 6.5. As presented in Fig. 6b, SFK-36 produced relatively high concentrations of ascomycin when the seed age was in the range of 44–48 h. The effect of temperature on ascomycin production is shown in Fig. 6c. Both ascomycin yield and biomass increased when the temperature was increased from 24 to 28 °C, suggesting that higher temperatures promoted mycelium growth and ascomycin accumulation. The highest ascomycin yield of 1451.4 mg/L was obtained at 28 °C. Nonetheless, it exerted a negative effect on ascomycin production and mycelium growth when temperature was higher than 28 °C. Additionally, culture time was an important factor influencing mycelium growth and ascomycin accumulation of SFK-36. The effects of different culture time were studied in the optimized medium (Fig. 6d). The highest ascomycin yield was obtained when SFK-36 was cultured in the optimized medium for 7 days, reaching 1468.2 mg/L. Optimization of fermentation conditions for ascomycin production by S. hygroscopicus SFK-36 in shake flasks. 
Effect of pH (a), seed age (b), temperature (c), culture time (d) on ascomycin production and biomass of SFK-36 Scale-up fermentation in a 5 L bioreactor Scale-up fermentation of ascomycin by SFK-36 in the optimized medium was carried out in a 5 L fermenter. The time course of the fermentation process is depicted in Fig. 7a. During the early stage of fermentation (0–36 h), total sugar in the fermentation broth rapidly decreased from 80 to 68.1 g/L, while biomass increased gradually to 11.8%. SFK-36 began to produce ascomycin at 36 h. The highest ascomycin concentration (357.9 mg/L) and biomass (17.6%) were observed at 72 h. Afterwards, both ascomycin production and biomass gradually decreased until the end of fermentation. SFK-36 stopped consuming the carbon source in the fermentation broth after 72 h, and total sugar remained relatively constant at 63.5 mg/L. Compared to flask culture, pH in the 5 L bioreactor increased from 6.5 to 8.9 throughout the whole fermentation period. Soluble starch was not utilized after 72 h, indicating that SFK-36 strain cannot efficiently utilize this carbon source when pH is higher than 7.5. Scale-up fermentation of S. hygroscopicus SFK-36 under optimized fermentation conditions in a 5 L fermenter. The pH-controlling strategies were conducted with no controlling (a), or maintained at a constant pH of 6.0 (b), 6.5 (c), 7.0 (d), respectively Therefore, a pH control strategy for the fermentation process was developed to enhance ascomycin production. To determine the optimal control point, the pH was maintained at three levels (6.0, 6.5 and 7.0) starting at 72 h. When pH was maintained at 6.0 (Fig. 7b), total sugar began decreasing stably following an increase in biomass and ascomycin production. On the other hand, abundant residual sugar (10.5 g/L) was observed at the end of fermentation, indicating that the carbon source cannot be completely utilized at pH 6.0. The highest ascomycin yield of 1285.8 mg/L was reached at 168 h. Similar results were observed when pH was maintained at 6.5 (Fig. 7c), except that the carbon source was consumed completely at the end of the fermentation period. Ascomycin yield increased to 1476.9 mg/L. As illustrated in Fig. 7d, when the medium was maintained at pH 7.0, concentration of the carbon source remained high (17.2 g/L), and ascomycin production diminished to 1129.4 mg/L. Therefore, pH control was an effective method for improving ascomycin production in the fermenter. The ARTP mutagenesis can cause greater DNA damage with higher diversity than UV irradiation or chemical mutagens, because of the reactive chemical species produced by the helium-based ARTP system (Ottenheim et al. 2018). DNA structure can be disrupted by these active chemicals, thereby inducing the microbial SOS repair system, which has a high error tolerance rate (Bugay et al. 2015; Zhang et al. 2015). Therefore, a variety of mismatches will be generated in the repair process, resulting in a large number of mutants. Accordingly, the positive mutant rate increased to 11.6%, making it easier to obtain stable high-yield strains. ARTP has been reported as an efficient method for improvement of secondary metabolites in Streptomyces (Ottenheim et al. 2018). In addition, ARTP mutagenesis substantially improved the yield of arachidonic acid produced by Mortierella alpina (Li et al. 2015). However, the development of ascomycin-producing strains by ARTP mutagenesis has not been reported so far. 
Recently, some efforts have been made to improve ascomycin production (Table 4). Few studies employed traditional mutation breeding methods, such as titanium sapphire laser mutagenesis and shikimic acid enduring screening, to obtain two stable high-yield strains, FS35 and SA68, respectively (Qi et al. 2012, 2014). These two strains were then genetically engineered to enhance ascomycin production. For example, ascomycin production was increased to 438.9 mg/L by overexpression of hcd and ccr in FS35 (Wang et al. 2017). Song et al. overexpressed the gene encoding the regulatory protein FkbR1 and its target gene fkbE in FS35 strain (Song et al. 2017). Additionally, engineered strain TD-ΔPyc-FkbO was constructed from SA68, thus improving ascomycin production to 610.0 mg/L (Qi et al. 2017). Nevertheless, ascomycin yield in these studies was still low to satisfy industrial demand. Compared with the above S. hygroscopicus strains, the SFK-36 mutant strain obtained via ARTP mutagenesis was found to have higher ascomycin yield, suggesting that ARTP mutagenesis is an effective way to enhance ascomycin production in S. hygroscopicus. Table 4 Ascomycin production by different S. hygroscopicus strains The genomes of Streptomyces sp. are characterized by extensive polycistronic organizations (Wei et al. 2018; Zhang et al. 2016b), thus, the whole ascomycin biosynthetic gene cluster was divided into seven co-transcription units, and the transcription of these units was analyzed by RT-PCR. The transcriptional levels of several genes, including fkbW/U/B/C/L/K/J/I/H/O/P/A/D/M, were significantly upregulated. It has been reported that proteins encoded by fkbW and fkbU, participate in the synthesis of ethylmalonyl-CoA, a precursor of ascomycin biosynthesis (Wang et al. 2017). The polyketide synthase (PKS) module consisting of fkbB, fkbC and fkbA, plays an important role in the synthesis of ascomycin skeleton (Wu et al. 2000). Additionally, post-modification of ascomycin structure depends on the modifying enzymes encoded by fkbO, fkbD and fkbM (Qi et al. 2017; Wu et al. 2000). The upregulated transcription of these functional genes could partly explain the improvement in ascomycin production observed in the SFK-36 mutant strain. Both mycelium growth and metabolite accumulation are affected by the composition of the fermentation medium (Chen et al. 2015; Huang et al. 2018). In single-factor experiments, we found that starch and oil as carbon sources and peanut meal as a nitrogen source can stably provide nutrients and energy for microbial growth and ascomycin production. In contrast, glucose and sucrose exerted a negative effect on ascomycin production in this study, which might be explained as that S. hygroscopicus strain could rapidly utilize the glucose and sucrose for growth at the early stage, therefore decreasing the accumulation of secondary metabolites (Deutscher 2008). Similar repressive effect of glucose and glycerol on tacrolimus was observed in S. tsukubaensis (Ordóñez-Robles et al. 2017). To investigate the interactions among different components of the fermentation medium and to obtain multiple responses at the same time, RSM was applied to design experiments and to identify optimum conditions with fewer experimental trials. Furthermore, significant interactions among the factors could be identified (Table 3). The results revealed that the interaction between soluble starch and peanut meal was statistically significant (P = 0.0083). 
In addition, a statistically significant interaction was also observed between soluble starch and soybean oil (P = 0.0161).

In our study, a high-producing S. hygroscopicus SFK-36 mutant strain was obtained by ARTP mutagenesis, and ascomycin production was markedly improved in the optimum fermentation medium designed by RSM. To the best of our knowledge, our methods generated the highest ascomycin yield reported so far. This finding indicates that the combination of conventional mutagenesis and rational medium optimization is an effective approach for improving ascomycin production.

Abbreviations
ARTP: atmospheric and room temperature plasma
RSM: response surface methodology
RT-PCR: reverse transcription-polymerase chain reaction
qRT-PCR: quantitative real-time polymerase chain reaction
SD: standard deviation
CFU: colony forming unit
PMV: packed mycelium volume
2D: two-dimensional
RAPD: random amplified polymorphic DNA
PKS: polyketide synthase

References
Bardur S, Andrzej B, Gail T, André V, Schuttelaar MLA, Xuejun Z, Uwe S, Paul Q, Yves P, Sigurdur K (2015) Safety and efficacy of pimecrolimus in atopic dermatitis: a 5-year randomized trial. Pediatrics 135(4):597. https://doi.org/10.1542/peds.2014-1990 Bérdy J (2005) Bioactive microbial metabolites. J Antibiot 58(1):1–26. https://doi.org/10.1038/ja.2005.1 Bugay AN, Krasavin EA, Parkhomenko AY, Vasilyeva MA (2015) Modeling nucleotide excision repair and its impact on UV-induced mutagenesis during SOS-response in bacterial cells. J Theor Biol 364:7–20. https://doi.org/10.1016/j.jtbi.2014.08.041 Bugelski PJ, Volk A, Walker MR, Krayer JH, Martin P, Descotes J (2010) Critical review of preclinical approaches to evaluate the potential of immunosuppressive drugs to influence human neoplasia. Int J Toxicol 29(5):435–466. https://doi.org/10.1177/1091581810374654 Chaudhary P, Chhokar V, Choudhary P, Kumar A, Beniwal V (2017) Optimization of chromium and tannic acid bioremediation by Aspergillus niveus using Plackett–Burman design and response surface methodology. AMB Express 7(1):201. https://doi.org/10.1186/s13568-017-0504-0 Chen D, Zhang Q, Zhang Q, Cen P, Xu Z, Liu W (2012) Improvement of FK506 production in Streptomyces tsukubaensis by genetic enhancement of the supply of unusual polyketide extender units via utilization of two distinct site-specific recombination systems. Appl Environ Microbiol 78(15):5093–5103. https://doi.org/10.1128/AEM.00450-12 Chen M, Xu T, Zhang G, Zhao J, Gao Z, Zhang C (2015) High-yield production of lipoglycopeptide antibiotic A40926 using a mutant strain Nonomuraea sp. DP-13 in optimized medium. Prep Biochem Biotechnol 46(2):171–175. https://doi.org/10.1080/10826068.2015.1015561 Deutscher J (2008) The mechanisms of carbon catabolite repression in bacteria. Curr Opin Microbiol 11(2):87–93. https://doi.org/10.1016/j.mib.2008.02.007 Du ZQ, Yuan Z, Qian ZG, Han X, Zhong JJ (2017) Combination of traditional mutation and metabolic engineering to enhance ansamitocin P-3 production in Actinosynnema pretiosum. Biotechnol Bioeng. https://doi.org/10.1002/bit.26396 Duan S, Yuan G, Zhao Y, Li H, Ni W, Sang M, Liu L, Shi Z (2012) Enhanced cephalosporin C production with a combinational ammonium sulfate and DO-Stat based soybean oil feeding strategy. Biochem Eng J 61(4):1–10. https://doi.org/10.1016/j.bej.2011.11.011 Fu G, Li R, Li K, Hu M, Yuan X, Li B, Wang F, Liu C, Wan Y (2016) Optimization of liquid-state fermentation conditions for the glyphosate degradation enzyme production of strain Aspergillus oryzae by ultraviolet mutagenesis. Prep Biochem Biotechnol 46(8):780–787.
https://doi.org/10.1080/10826068.2015.1135462 Ho VC, Gupta A, Kaufmann R, Todd G, Vanaclocha F, Takaoka R, Fölster-Holst R, Potter P, Marshall K, Thurston M (2003) Safety and efficacy of nonsteroid pimecrolimus cream 1% in the treatment of atopic dermatitis in infants. J Pediatr 142(2):155–162. https://doi.org/10.1067/mpd.2003.65 Huang J, Ou Y, Zhang D, Zhang G, Pan Y (2018) Optimization of the culture condition of Bacillus mucilaginous using Agaricus bisporus industrial wastewater by Plackett–Burman combined with Box–Behnken response surface method. AMB Express 8(1):141. https://doi.org/10.1186/s13568-018-0671-7 Ianiri G, Clancey SA, Lee SC, Heitman J (2017) FKBP12-Dependent inhibition of calcineurin mediates immunosuppressive antifungal drug action in Malassezia. MBio. https://doi.org/10.1128/mBio.01752-17 Jiang Y, Shang YP, Li H, Zhang C, Pan J, Bai YP, Li CX, Xu JH (2017) Enhancing transglutaminase production of Streptomyces mobaraensis by iterative mutagenesis breeding with atmospheric and room-temperature plasma (ARTP). Bioresour Bioprocess 4(1):37. https://doi.org/10.1186/s40643-017-0168-2 Ju XM, Wang DH, Zhang GC, Cao D, Wei GY (2015) Efficient pullulan production by bioconversion using Aureobasidium pullulans as the whole-cell catalyst. Appl Microbiol Biotechnol 99(1):1–10. https://doi.org/10.1007/s00253-014-6100-1 Lee NK (2018) Statistical optimization of medium and fermentation conditions of recombinant Pichia pastoris for the production of xylanase. Biotechnol Bioprocess Eng 23(1):55–63. https://doi.org/10.1007/s12257-017-0262-5 Li X, Liu R, Li J, Chang M, Liu Y, Jin Q, Wang X (2015) Enhanced arachidonic acid production from Mortierella alpina combining atmospheric and room temperature plasma (ARTP) and diethyl sulfate treatments. Bioresour Technol 177(177C):134–140. https://doi.org/10.1016/j.biortech.2014.11.051 Livak KJ, Schmittgen TD (2001) Analysis of relative gene expression data using real-time quantitative PCR and the 2−ΔΔCT method. Methods 25(4):402–408. https://doi.org/10.1006/meth.2001.1262 Lu C, Zhang X, Jiang M, Bai L (2016) Enhanced salinomycin production by adjusting the supply of polyketide extender units in Streptomyces albus. Metab Eng 35:129–137. https://doi.org/10.1016/j.ymben.2016.02.012 Martins M, Sarmento D, Oliveira MM (2004) Genetic stability of micropropagated almond plantlets, as assessed by RAPD and ISSR markers. Plant Cell Rep 23(7):492–496. https://doi.org/10.1007/s00299-004-0870-3 Ordóñez-Robles M, Santos-Beneit F, Albillos SM, Liras P, Martín JF, Rodríguez-García A (2017) Streptomyces tsukubaensis as a new model for carbon repression: transcriptomic response to tacrolimus repressing carbon sources. Appl Microbiol Biotechnol 101(22):8181–8195. https://doi.org/10.1007/s00253-017-8545-5 Ottenheim C, Nawrath M, Wu JC (2018) Microbial mutagenesis by atmospheric and room-temperature plasma (ARTP): the latest development. Bioresour Bioprocess 5(1):12. https://doi.org/10.1186/s40643-018-0200-1 Qi H, Xin X, Li S, Wen J, Chen Y, Jia X (2012) Higher-level production of ascomycin (FK520) by Streptomyces hygroscopicus var. ascomyceticus irradiated by femtosecond laser. Biotechnol Bioprocess Eng 17(4):770–779. https://doi.org/10.1007/s12257-012-0114-2 Qi H, Zhao S, Wen J, Chen Y, Jia X (2014) Analysis of ascomycin production enhanced by shikimic acid resistance and addition in Streptomyces hygroscopicus var. ascomyceticus. Biochem Eng J 82:124–133. 
https://doi.org/10.1016/j.bej.2013.11.006 Qi H, Lv M, Song K, Wen J (2017) Integration of parallel 13C-labeling experiments and in silico pathway analysis for enhanced production of ascomycin. Biotechnol Bioeng 114(5):1036–1044. https://doi.org/10.1002/bit.26223 Ren F, Chen L, Tong Q (2017) Highly improved acarbose production of Actinomyces through the combination of ARTP and penicillin susceptible mutant screening. World J Microbiol Biotechnol 33(1):16. https://doi.org/10.1007/s11274-016-2156-7 Sikora S, Redzepovic SI, Kozumplik V (1997) Genetic diversity of Bradyrhizobium japonicum field population revealed by RAPD fingerprinting. J Appl Microbiol 82(4):527–531. https://doi.org/10.1046/j.1365-2672.1997.00140.x Sivaramakrishnan R, Incharoensakdi A (2017) Enhancement of lipid production in Scenedesmus sp. by UV mutagenesis and hydrogen peroxide treatment. Bioresour Technol 235:366. https://doi.org/10.1016/j.biortech.2017.03.102 Song K, Wei L, Liu J, Wang J, Qi H, Wen J (2017) Engineering of the LysR family transcriptional regulator FkbR1 and its target gene to improve ascomycin production. Appl Microbiol Biotechnol 101(11):4581–4592. https://doi.org/10.1007/s00253-017-8242-4 Taieb A, Alomar A, Böhm M, Dell'Anna ML, Pase AD, Eleftheriadou V, Ezzedine K, Gauthier Y, Gawkrodger DJ, Jouary T (2013) Guidelines for the management of vitiligo: the European dermatology forum consensus. Br J Dermatol 168(1):5. https://doi.org/10.1111/j.1365-2133.2012.11197.x Wang J, Cheng W, Song K, Wen J (2017) Metabolic network model guided engineering ethylmalonyl-CoA pathway to improve ascomycin production in Streptomyces hygroscopicus var. ascomyceticus. Microb Cell Fact 16(1):169. https://doi.org/10.1186/s12934-017-0787-5 Wang WY, Yang SB, Wu YJ, Shen XF, Chen SX (2018) Enhancement of A82846B yield and proportion by overexpressing the halogenase gene in Amycolatopsis orientalis SIPI18099. Appl Microbiol Biotechnol 102(13):5635–5643. https://doi.org/10.1007/s00253-018-8983-8 Wei K, Wu Y, Li L, Jiang W, Hu J, Lu Y, Chen S (2018) MilR2, a novel TetR family regulator involved in 5-oxomilbemycin A3/A4 biosynthesis in Streptomyces hygroscopicus. Appl Microbiol Biotechnol 102(20):8841–8853. https://doi.org/10.1007/s00253-018-9280-2 Wu K, Chung L, Revill WP, Katz L, Reeves CD (2000) The FK520 gene cluster of Streptomyces hygroscopicus var. ascomyceticus (ATCC 14891) contains genes for biosynthesis of unusual polyketide extender units. Gene 251(1):81–90. https://doi.org/10.1016/S0378-1119(00)00171-2 Xin X, Qi H, Wen J, Jia X, Chen Y (2015) Reduction of foaming and enhancement of ascomycin production in rational Streptomyces hygroscopicus fermentation. Chin J Chem Eng 23(7):1178–1182. https://doi.org/10.1016/j.cjche.2014.04.006 Zhang X, Zhang C, Zhou QQ, Zhang XF, Wang LY, Chang HB, Li HP, Oda Y, Xing XH (2015) Quantitative evaluation of DNA damage and mutation rate by atmospheric and room-temperature plasma (ARTP) and conventional mutagenesis. Appl Microbiol Biotechnol 99(13):5639. https://doi.org/10.1007/s00253-015-6678-y Zhang C, Shen H, Zhang X, Yu X, Wang H, Xiao S, Wang J, Zhao Z (2016a) Combined mutagenesis of Rhodosporidium toruloides for improved production of carotenoids and lipids. Biotechnol Lett 38(10):1–6. https://doi.org/10.1007/s10529-016-2148-6 Zhang XS, Luo HD, Tao Y, Wang YY, Jiang XH, Jiang H, Li YQ (2016b) FkbN and Tcs7 are pathway-specific regulators of the FK506 biosynthetic gene cluster in Streptomyces tsukubaensis L19. J Ind Microbiol Biotechnol 43(12):1693–1703. 
https://doi.org/10.1007/s10295-016-1849-0

Authors' contributions
SC and DJ designed the experiment. ZY contributed to the experimental design, experimental operation, data analysis and manuscript preparation. XS, YW and SY were responsible for the manuscript review and editing. All authors read and approved the final manuscript.

Funding
This study was funded by the National Major Scientific and Technological Special Project for "Significant New Drugs Development" (2014ZX09201001-005-001).

Author information
Department of Microbiological and Biochemical Pharmacy, The Key Laboratory of Smart Drug Delivery, Ministry of Education, School of Pharmacy, Fudan University, Shanghai, 201203, China: Zhituo Yu & Dianwen Ju
Shanghai Institute of Pharmaceutical Industry, China State Institute of Pharmaceutical Industry, Shanghai, 201203, China: Zhituo Yu, Xiaofang Shen, Yuanjie Wu, Songbai Yang & Shaoxin Chen

Corresponding authors
Correspondence to Dianwen Ju or Shaoxin Chen.

Additional files
Sequences of primer pairs used in this study. Figure S1. Structure of ascomycin and tacrolimus. Figure S2. Genetic stability analysis of S. hygroscopicus SFK-36 by RAPD. M: Marker; G0: RAPD map of parent strain; G10: RAPD map of the strain subcultured for 10 times; RAPD-1, RAPD-2 and RAPD-3: primers are shown in Table S1.

Citation
Yu, Z., Shen, X., Wu, Y. et al. Enhancement of ascomycin production via a combination of atmospheric and room temperature plasma mutagenesis in Streptomyces hygroscopicus and medium optimization. AMB Expr 9, 25 (2019). https://doi.org/10.1186/s13568-019-0749-x

Keywords: Streptomyces hygroscopicus; Ascomycin; Medium optimization
DyeVC: an approach for monitoring and visualizing distributed repositories

Cristiano Cesario1, Ruben Interian1 & Leonardo Murta ORCID: orcid.org/0000-0002-5173-12471

Journal of Software Engineering Research and Development volume 5, Article number: 5 (2017)

Software development using distributed version control systems has become more frequent recently. Such systems bring more flexibility, but also greater complexity for managing and monitoring multiple existing repositories as well as their myriad branches. In this paper, we propose DyeVC, an approach to assist developers and repository administrators in identifying dependencies among clones of distributed repositories. It allows understanding what is going on around one's clone and depicting the relationship between existing clones. DyeVC was evaluated over open source projects, showing how they could benefit from having such a tool in place. We also ran an observational and a performance evaluation over DyeVC, and the results were promising: it was considered easy to use and fast for most repository history exploration operations while providing the expected answers.

Version Control Systems (VCS) date back to the 70s, when SCCS emerged (Rochkind 1975). Their primary purpose is to keep software development under control (Estublier 2000). Over these almost 40 years, VCSs have evolved from a centralized repository with local access (e.g., SCCS and RCS (Tichy 1985)) to a client-server architecture (e.g., CVS (Cederqvist 2005) and Subversion (Collins-Sussman et al. 2011)). More recently, distributed VCSs (DVCS) arose (e.g., Git (Chacon 2009) and Mercurial (O'Sullivan 2009a)), allowing clones of the entire repository to exist in different locations. According to a survey conducted by the Eclipse community (2014), Git and GitHub combined usage increased from 6.8 to 42.9% between 2010 and 2014 (a growth greater than 500%). During this same period, Subversion and CVS combined usage decreased from 71 to 34.4%. This clearly shows momentum and a strong trend toward the adoption of DVCSs in the open source community. Besides these changes from local to client-server and then to a distributed architecture, the concurrency control policy adopted by VCSs also changed from lock-based (pessimistic) to branch-based (optimistic). According to Walrad and Strom (Walrad and Strom 2002), creating branches in VCSs is essential to software development because it enables parallel development, allowing the maintenance of different versions of a system, the customization to different platforms/customers, among other features. DVCSs include better support for working with branches (O'Sullivan 2009b), turning branch creation into a recurring pattern, no matter if this creation is explicitly done by executing a "branch" command or implicitly when a repository is cloned. However, distributed software development, especially from the geographical perspective (Gumm 2006), brings a set of risk factors, and Configuration Management (CM) is affected by them. The increasing growth of development teams and their distribution across distant locations, together with the proliferation of branches, introduce additional complexity for perceiving actions performed in parallel by different developers. According to Perry et al. (1998), concurrent development increases the number of defects in software. Besides, da Silva et al.
(2006) state that branches are frequently used for promoting isolation among developers, postponing the perception of conflicts that result from changes made by co-workers. In the context of DVCSs, these conflicts are noticed only after pulling changes. Moreover, Brun et al. (2011) show that even using modern DVCSs, conflicts during merges are frequent, persistent, and appear not only as overlapping textual edits (i.e., physical conflicts) but also as subsequent build (i.e., syntactic conflicts) and test failures (i.e., semantic conflicts). By enabling repository clones, DVCSs expand the branching possibilities discussed by Appleton et al. (1998), allowing several repositories to coexist with fragments of the project history. This may lead to complex topologies where changes can be sent to or received from any clone. This scenario generates traffic similar to that of peer-to-peer applications. In practice, projects impose some restrictions on this topology freedom. However, it can still be much more complex than the traditional client-server topology found in centralized VCSs. With this diversity of topologies, managing the evolution of a complex system becomes a tough task, making it difficult to find answers to the following questions:

Q1: Which clones were created from a repository?
Q2: What are the communication paths among different clones?
Q3: Which changes are under work in parallel (in different clones or different branches) and which of them are available to be incorporated into others' clones?

Most of the existing works, such as Palantir (Sarma and van der Hoek 2002), FASTDash (Biehl et al. 2007), Lighthouse (da Silva et al. 2006), CollabVS (Dewan and Hegde 2007), Safe-Commit (Wloka et al. 2009), Crystal (Brun et al. 2011), and WeCode (Guimarães and Silva 2012), deal with question Q3, giving developers awareness of concurrent changes. However, they do not provide an overview of the topology of repositories, indicating which commits belong to which clones. This overview is essential to understand the distributed evolution of the project. To answer the questions above, we propose DyeVC, a novel monitoring and visualization approach for DVCSs that gathers information about different repositories and presents it visually to the user. DyeVC allows developers to perceive how their repository evolved over time and how this evolution compares to the evolution of other repositories in the project. DyeVC's main goal is two-fold: increasing the developers' knowledge of what is going on around their repository and the repositories of their teammates, and enabling repository administrators to visualize the relationship between existing clones. DyeVC was evaluated over open source projects, showing how they could benefit from having such a tool in place. We also ran an observational and a performance evaluation over DyeVC, and the results were promising: it was considered easy to use and fast for most repository history exploration operations while providing the expected answers. This paper extends a previous conference paper (Cesario and Murta 2016) by providing a more thorough discussion of our approach, including how DyeVC discovers the topology and a formal definition of the process underneath DyeVC. Moreover, as the previous version of DyeVC struggled when dealing with large repositories (over 6500 commits), we also added an automatic collapsing feature.
This new feature provides a dual contribution: it allowed DyeVC to deal with larger repositories and reduced cluttering when presenting information to users. The performance evaluation was expanded to present an assessment of the automatic collapsing feature. Finally, we included a deeper comparison of DyeVC with its related work. This paper is organized as follows: Section 2 shows a motivational example. Section 3 presents the DyeVC approach. Section 4 presents the technologies used in our prototype implementation. Section 5 describes the evaluation of DyeVC. Section 6 discusses related work, and Section 7 concludes the paper and presents some suggestions for future work.

Motivational example

Figure 1 shows a scenario with some developers, each one owning a clone of the repository created at Xavier Institute. Xavier Institute acts as a central repository, where code developed by all teams is integrated, tested, and released to production. There is a team working at Xavier Institute, led by Professor Xavier, and a remote developer (Storm) who periodically receives updates from the Institute. Outside the Institute, Wolverine leads a remote team located in a different site, which is constantly synchronized with the Institute. Solid lines in Fig. 1 indicate data being pushed, whereas dotted lines indicate data being pulled. Thus, for example, Rogue can both pull updates from Gambit and push updates to him, and Beast can pull updates from Rogue, but cannot push updates to her.

Development scenario involving some developers

Each one of the developers has a complete copy of the repository. Luckily, this scenario has a CM Plan in action. Otherwise, each one would be able to send and receive updates to and from any other, leading to a total of n × (n − 1) different possibilities of communication (where n is the number of developers in the topology). In practice, however, this limit is not reached: while interaction amongst some developers is frequent, it may happen that others have no idea about the existence of some coworkers. This occurs with Mystique and Nightcrawler, for example, between whom there is no direct communication. From a developer's point of view, such as Beast's, questions like the following can arise: How can he know, at a given moment, whether there are commits in Rogue's, Gambit's, or Nightcrawler's clones that have not been pulled yet? Suppose that Beast is working on a feature that depends on a utility class developed by Gambit. If such a class has a bug, and Gambit is working to solve it, Beast would want to know when Gambit's commit is ready to be pulled. Moreover, if Gambit is evolving such a class, and Beast has this information, he could decide to anticipate a pull to incorporate Gambit's changes into his workspace. Are there local commits pending to be pushed to Gambit? Beast could certainly pull changes from his peers periodically, checking if there were updates available, but this would be a manual procedure, prone to be forgotten. It would be more practical if Beast could have up-to-date knowledge of his peers, warning him about any local or remote updates that have not been synchronized yet. On the other hand, from an administrator's point of view, questions such as the following are pertinent: How can she know which clones of a project exist and how they relate to each other? This is a common need for repository administrators.
It helps not only in identifying who must be notified regarding any news related to the repository but also in visually verifying whether pull/push policies are being followed by the team. Having a map of all existing clones can help repository administrators in identifying who is pushing to / pulling from each other. For instance, unauthorized access to push to a production repository can be visualized, and the administrator can take actions to revoke such access. How can she know if there are pending commits to be sent from a staging repository to a production one? Knowing how many commits are pending and which commits these are can help administrators decide whether this is the right time to release a new version of the system to production.

DyeVC approach

Aiming at supporting both developers and repository administrators in understanding the interaction among repository clones, the main features of DyeVC include: (1) a mechanism to gather information from a set of clones (such as their relationships and known commits) and (2) a set of extensible views with different levels of detail, which let DyeVC users visualize this information. We detail in the following subsection how DyeVC gathers information from DVCSs. Next, we discuss how this information is presented using different levels of detail. Finally, we show what happens behind the scenes, discussing the algorithm involved in the data synchronization process.

Information gathering

DyeVC continuously gathers information from interrelated clones, starting from clones registered by the user. Figure 2 shows a deployment view of DyeVC's architecture. For each clone rep that the user registers to monitor, DyeVC transparently creates a local clone rep' in the user's home folder to fetch data from all of the peers with which rep communicates. Data is gathered by DyeVC instances running at each user machine and is stored in a central document database. In this way, information from one DyeVC instance is made available to every other instance in the topology.

How DyeVC gathers information

DyeVC gathers information from registered clones in the user's machine and also from their peers, which are clones that communicate with them. Since there is a communication path between a registered clone and its peers (either to push data or to pull data), we can analyze the commits that exist in these peers. This allows us to present a broader topology visualization that contains not only registered clones, but also those that have a push or pull relationship with them. DyeVC finds out related clones by looking at the remote repositories registered in the DVCS configuration. More details on how data is gathered are explained in Section 3.3. Figure 3 shows how DyeVC discovers the topology from the nodes where it is running and the registered clones. Blue nodes represent registered clones where DyeVC is running, yellow nodes represent known clones located at nodes where DyeVC is not running, and dashed nodes and dashed lines represent clones and communication paths, respectively, that are not known yet. Suppose a scenario where the existing clones and interdependencies are shown in Fig. 3a, which depicts the same scenario shown in Section 2, but here each clone is represented by the first letter of its name. After installing DyeVC and registering clone X, DyeVC finds out that this clone communicates with clones W, P, and S (either by pushing to or pulling from them), as shown in Fig. 3b.
Later on, clone P is registered, and clones C, J, and M are included as known clones in the topology (Fig. 3c). Clone G is the next to be registered, allowing DyeVC to discover that clones R, N, and B also exist, as well as the communication between clone G and clone W, which was already a known clone (Fig. 3d). Assuming that no other clones are registered, the known topology is shown in Fig. 3e. Notice that, although only clones G, P, and X were registered, DyeVC is also aware of the existence of clones B, C, J, M, N, S, and W. Only some communication paths between clones will not be known (C-J, J-M, S-M, R-B, R-N, and N-B).

DyeVC discovering the topology: actual topology (a), discovered topology with X (b), discovered topology with X and P (c), discovered topology with X, P, and G (d), final discovered topology (e)

DyeVC finds out related clones by looking at the remote repositories, which are registered in Git's config file of each clone. Figure 4 shows an example of this configuration, taken from a local clone of the DyeVC project, where there is a remote named origin, which is located at github.com/gems-uff/dyevc. This information is in the url parameter, which indicates to Git that pushes and pulls use the same location. If there were a pushurl parameter in the configuration, besides the url parameter, pulls would use the location in the url parameter and pushes would use the location in the pushurl parameter.

Remote repository configuration in Git's config file

Data stored in the central database follows the metamodel presented in Fig. 5. A Project groups repository clones of the same system. Clones are stored as RepositoryInfo and are identified by an id and a meaningful clone name provided by the user (cloneName attribute). A RepositoryInfo has a list of clones to which it pushes data and a list of clones from which it pulls data. These lists are represented respectively by the self-associations pushesTo and pullsFrom. Finally, a RepositoryInfo stores the hostName where it resides (e.g., a server name or localhost), its clonePath (be it an operating system path or a URL), and the set of DyeVC instances that have registered it to be monitored (monitoredBy attribute).

Metamodel used to store DyeVC data

Branches are part of a RepositoryInfo. A Branch has a name and a boolean attribute isTracked, which is true if the branch tracks a remote branch. A RepositoryInfo may have one or many branches (it must have at least one branch, which is the main one). A Branch has two associations with CommitInfo: through the first association, a Branch knows which commit is its head and, conversely, a commit knows which branches point to it as a head (headOf association end). The second association represents which commits are reachable from a given branch (reachableCommits association end) and, conversely, the branches from which the commit is reachable (reachableFrom association end). The finest grain of information is the CommitInfo, which represents each commit in the topology. A commit is identified by a hash code (hash attribute) and refers to its parents (except for the first commit in the repository, which does not have any parent). As each commit may not exist in all clones of the topology, we store the list of clones where each commit can be found (foundIn association end). We also store the committer, the commit message (shortMessage attribute), and whether the commits belong to tracked or non-tracked branches (tracked attribute).
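For illustration, a remote section in a clone's .git/config typically looks like the following (the second remote and its URLs are hypothetical, shown only to illustrate the url/pushurl distinction discussed above); remotes like these are the raw data from which the pushesTo and pullsFrom lists of the metamodel are derived:

[remote "origin"]
    url = https://github.com/gems-uff/dyevc
    fetch = +refs/heads/*:refs/remotes/origin/*
[remote "staging"]
    url = https://example.org/team/project.git
    pushurl = ssh://git@example.org/team/project.git
    fetch = +refs/heads/*:refs/remotes/staging/*

In the first remote, pushes and pulls both use the url location; in the second, pulls use url while pushes go to pushurl.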
DyeVC presents information at four different levels of detail: Level 1 shows high-level notifications about registered repositories; Level 2 shows the whole topology of a given project; Level 3 zooms into the branches of the repository, showing the status of each tracked branch; lastly, Level 4 zooms into the commits of the repository, showing a visual log with information about each commit. The following sections discuss these levels.

Level 1: Notifications

In Level 1, our approach periodically monitors registered repositories and presents notifications whenever a change is detected in any known peer. The period between subsequent runs is configurable, and notifications are presented in the system notification area, in a non-obtrusive way. Figure 6 shows an example of this kind of notification, where DyeVC detected changes in two different repositories. The notification shows the repository id, the clone name, and the project (system) name. Clicking on the balloon opens DyeVC's main screen.

DyeVC showing notifications in the notification area

Level 2: Topology

Aiming at helping to answer questions Q1 and Q2, we present a topology view showing all repositories for a given project (Fig. 7), where each node represents a known clone. A blue computer represents the current user clone, and black computers represent other clones where DyeVC is running. Servers represent central repositories that do not pull from nor push to any other clone, or clones where DyeVC is not running. Both kinds of nodes use the same representation because, when DyeVC is not running at a given clone, we cannot infer the pushesTo and pullsFrom lists, which will thus be empty, as in a server. At first sight, this could be seen as a limitation of the topology view. However, DyeVC considers servers as clones; the denomination "server" is used just to visually differentiate them from other clones. We believe that plotting servers and clones where DyeVC is not running with the same icons is not a problem because the topology view brings more information about the clones (e.g., clone address and name). Thus, a repository administrator can distinguish the servers among the plotted clones.

Topology view for a given project

Each edge in the graph represents a relationship between two repositories. Continuous edges mean that the source clone pushes to the destination clone, whereas dashed edges mean that the destination clone pulls from the source clone. The edge labels show two numbers separated by a dash. The first and second numbers represent how many commits in tracked and non-tracked branches of the source clone are missing in the destination clone, respectively. The edge colors are used to represent the synchronization status: green edges mean that both clones are synchronized (i.e., the destination clone has all the commits present in the source clone), whereas red edges mean that the pair is not synchronized and indicate the direction in which commits are missing. For example, it is possible to observe in Fig. 7 that the current user clone (blue computer) is hosted at cmcdell and is named dyevc. This clone pulls from gems-uff/dyevc, which is located at github.com, and there are four tracked commits ready to be pulled (i.e., commits that exist in the remote repository and do not exist locally). It also pushes to the same peer, having five tracked commits ready to be pushed. In this case, both edges are red, which calls for further investigation of what is happening, because such a situation may lead to integration conflicts.
Level 3: Tracked branches

To help answer question Q3, DyeVC's main screen (see Fig. 8) shows Level 3 information, allowing one to view the status of each tracked branch in registered repositories with regard to their peers. This information is complemented with that of Level 4, shown in the next section.

DyeVC main screen

The status evaluation considers the existing commits in each repository individually. Due to the nature of DVCSs, old data is almost never deleted, and commits are cumulative. Thus, if commit N is created over commit N – 1, the existence of commit N in a given repository implies that commit N – 1 also exists in the repository. In this way, by using set theory, it is possible to subtract the set of commits in the local repository from the set of commits in its peers, resulting in the set of commits not pulled yet. In this case, the local repository will be behind its peers (arrow down in Fig. 8). Conversely, subtracting the sets in the inverse order will result in the set of commits not pushed yet, meaning that the local repository is ahead of its peers (arrow up). When both sets are empty, the local repository is synchronized (green checkmark in Fig. 8), and when both sets have elements, it is both ahead and behind its peer (arrow up and down in Fig. 8). To illustrate how our approach works, let us assume that each commit is represented by an integer number. At a given moment, the local repositories of each developer have the commits shown in Table 1. Consider the synchronization paths presented in the right-hand side of Fig. 1, where the perception of each developer regarding their known peers is shown in Table 2. Notice that the perceptions are not symmetric. For instance, as Gambit does not pull updates from Nightcrawler, there is no sense in giving him information regarding Nightcrawler. Furthermore, it is uncommon to have a scenario where pushes are performed from one developer to another (such as the one between Beast and Gambit). What usually happens is that a developer pulls from another (for example, between Gambit and Nightcrawler), avoiding the inadvertent inclusion of commits in others' clones. Although infrequent, this scenario helps in understanding the need for awareness about who the peers in a project are and what their interdependencies are.

Table 1 Existing commits in each repository

Table 2 Status of each repository based on known remote repositories
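The status classification just described, and illustrated in Tables 1 and 2, boils down to two set subtractions over commit identifiers. The following minimal Java sketch uses hypothetical names (it is not DyeVC's actual code) to show the idea:

import java.util.HashSet;
import java.util.Set;

// Classifies a local clone against one peer by subtracting sets of commit hashes,
// mirroring the ahead/behind reasoning of the tracked-branches view.
class SyncStatus {
    static String classify(Set<String> localCommits, Set<String> peerCommits) {
        Set<String> toPull = new HashSet<>(peerCommits);
        toPull.removeAll(localCommits);            // commits not pulled yet -> behind
        Set<String> toPush = new HashSet<>(localCommits);
        toPush.removeAll(peerCommits);             // commits not pushed yet -> ahead
        if (toPull.isEmpty() && toPush.isEmpty()) return "SYNCHRONIZED";
        if (toPull.isEmpty()) return "AHEAD";
        if (toPush.isEmpty()) return "BEHIND";
        return "AHEAD_AND_BEHIND";
    }
}

Feeding this classification with the integer commit identifiers of Table 1 would produce the kind of per-peer status reported in Table 2.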
Level 4: Commits

Level 4 complements the information of Level 3 to provide an answer to question Q3. Differently from the usual repository version graph, it presents a combined version graph of the entire topology (Fig. 9). Each vertex in the graph represents either a known commit in the topology, which is named after the first five characters of its hash (e.g., the node labeled 2e10a in Fig. 9), or a collapsed node, representing several commits blended together. We implement two ways of collapsing nodes to provide a better understanding of huge amounts of data: manual and automatic. Manually collapsed nodes are named after the number of contained nodes (such as the white node containing 118 commits and the green node containing 24 commits in Fig. 9). Automatically collapsed nodes have ellipses before and after the number of contained nodes in their names (if the first collapse of Fig. 9 were automatic, its name would be "…118…"). Automatic collapsing is detailed in Section 3.2.5.

Collapsed commit history

Thicker borders denote that the commit is a branch's head (e.g., commit ea6a4). Commits are drawn according to their precedence order. Thus, if a commit N is created over a commit N – 1, then commit N will be located to the right of commit N – 1. For each commit, DyeVC presents the information described in Fig. 5 (gathered from the central database), along with information that is read in real time from the repository metadata, such as the branches that point to that commit and the affected files (added, edited, and deleted). This visualization contains all commits of all clones in an integrated graph. Each commit is painted according to its existence in the local repository and the peers' repositories. Ordinary commits that exist locally and in all peers are painted in white. Green commits are ready to be pushed, as they exist locally but do not exist in peers on the push list. Yellow commits need attention because they exist in at least one peer in the pull list, but do not exist locally, meaning that they may be pulled. Red commits do not exist locally and are not available to be pulled, as they exist only in clones that are not peers. Finally, gray commits belong to non-tracked branches, so they can neither be pushed nor pulled. Heads of these branches are not identified with thicker borders. This visualization can easily have thousands of nodes, one for each commit in the topology. Nevertheless, despite the high number of nodes, users are usually interested in the most recent commits. As we show the commits in chronological order, from left to right, the most recent commits will be at the right part of the visualization. DyeVC positions the graph so that these commits are shown when opening the visualization.

Automatic collapsing

As previously discussed, the first version of DyeVC struggled when dealing with larger repositories (over 6500 commits). This limitation was mainly due to the memory used to represent commit nodes in the commit history graph. However, we observed that many of the commit nodes are unnecessary for comprehending the evolution and, in fact, clutter the visualization. For instance, a sequence of 20 commit nodes that are ordinary revisions and that belong to all clones (i.e., all have the same white color) could be collapsed into just one commit node, avoiding visualization cluttering and boosting performance. This observation motivated us to design and implement an automatic collapsing feature for DyeVC. We identified two common node structures that can be automatically collapsed: sequential and parallel. The former contains a sequence of commits of the same type, where each of them has degree two, i.e., nodes with just one ancestor and one successor. This kind of structure can be collapsed because it does not represent any additional information besides the fact that some sequential work was performed. Figure 10 shows examples of sequences of commits, highlighted in red, which could be collapsed, producing the graph shown in Fig. 11 (still in red). On the other hand, the latter contains one fork node and one merge node, with at most one (regular or collapsed) 2-degree node in each branch, between the fork and the merge nodes. Figure 11 shows examples of this parallel structure, highlighted in yellow. The result of the collapse is shown in Fig. 12. The numbers inside the red and yellow circles refer to the number of collapsed nodes.
Sequential structures before automatic collapsing

Sequential structures after automatic collapsing and Parallel structures before automatic collapsing

Parallel structures after automatic collapsing

We implemented an iterative algorithm that works in phases to benefit from both sequential and parallel collapse strategies together. The algorithm is shown in Fig. 13.

Automatic collapsing algorithm

The algorithm receives the commit graph and the number of iterations as parameters. Each iteration runs in linear time. The first phase collapses sequential structures (lines 2–11). The set of visited nodes is initialized as an empty set in line 2. Each node is inspected in a loop (lines 3–11). If the node has not been visited yet, the presence of a linear commit chain is tested in line 5 by examining the predecessors and successors of the node. All found linear commit chain elements are marked as visited in line 6. If the linear commit chain has more than one node, it is collapsed (lines 7–9). The second phase of the algorithm collapses parallel structures (lines 12–28). The visited set is reinitialized as an empty set in line 12. All nodes of the graph are examined again in a loop (lines 13–28). If the element has not been visited yet and we are in the presence of a fork node (condition in line 14), the parallel structure generated by this fork is analyzed. Both nodes immediately after the fork are saved in variables a and b (lines 15–16). Initially, a group set is initialized containing a single element, the fork node, in line 17. If the parallel structure resembles the 4-node group highlighted in yellow in Fig. 11, then the group to be collapsed is populated in lines 18–19. On the other hand, if the parallel structure resembles the 3-node group highlighted in yellow in Fig. 11, then the group to be collapsed is populated in lines 20–21. The visited set is updated in line 23. If the created node group has more than one node, it is collapsed in lines 24–26. The phases of the algorithm can be repeated, as collapsing parallel structures may lead to new sequential structures. For instance, after applying parallel collapses over the graph shown in Fig. 11, a new sequential structure is formed, as illustrated in Fig. 12. A further iteration would then lead to a new collapse, and so on. As previously discussed, collapses are performed just for commits of the same type (same color, discussed in Section 3.2.4), reducing the size of the graph without compromising the quality of the information shown in the graph.
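To make the first phase more concrete, the sketch below (hypothetical types and method names, not DyeVC's actual implementation) identifies the maximal chain of same-colored, degree-two nodes around a given node; collapsing then amounts to replacing such a chain, when it has more than one element, by a single node that records how many commits it contains:

import java.util.ArrayList;
import java.util.LinkedList;
import java.util.List;

// Hypothetical commit-node type: one color (white, green, yellow, red, gray),
// explicit parent/child links, and a counter that a collapsed node would carry.
class CommitNode {
    final String id;
    final String color;
    final List<CommitNode> parents = new ArrayList<>();
    final List<CommitNode> children = new ArrayList<>();
    int collapsedCount = 1;
    CommitNode(String id, String color) { this.id = id; this.color = color; }
    boolean isLinear() { return parents.size() == 1 && children.size() == 1; }
}

class SequentialCollapse {
    // Returns the maximal run of linear nodes of the same color containing n.
    static List<CommitNode> chainContaining(CommitNode n) {
        LinkedList<CommitNode> chain = new LinkedList<>();
        chain.add(n);
        if (!n.isLinear()) return chain;
        CommitNode cur = n.parents.get(0);            // extend towards older commits
        while (cur.isLinear() && cur.color.equals(n.color)) {
            chain.addFirst(cur);
            cur = cur.parents.get(0);
        }
        cur = n.children.get(0);                      // extend towards newer commits
        while (cur.isLinear() && cur.color.equals(n.color)) {
            chain.addLast(cur);
            cur = cur.children.get(0);
        }
        return chain;                                  // collapse only if chain.size() > 1
    }
}

Marking every returned chain element as visited, exactly as in lines 2 to 11 of Fig. 13, keeps the whole pass linear in the number of nodes.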
The process underneath DyeVC can be formally defined using set theory, in order to describe how the data is structured and how DyeVC operates on this data to identify the repositories that are ahead of or behind other repositories, showing the commits that are missing or that belong to specific branches. We can define a project p as a tuple \((R, C, C_{database})\), where R is the set of all cloned repositories of p monitored by DyeVC, C is the set of all commits of p, and \(C_{database} \subseteq C\) is the set of commits of p in the DyeVC database.

Each repository \(r_i \in R\) is a tuple \(\left({R}_i^{push},{R}_i^{pull},{C}_i^{previous},{C}_i^{current},{B}_i\right)\), where \({R}_i^{push}\subseteq R\) is the set of repositories that \(r_i\) is allowed to push to, \({R}_i^{pull}\subseteq R\) is the set of repositories that \(r_i\) is allowed to pull from, \({C}_i^{previous}\subseteq C\) is the set of commits in \(r_i\) in the previous execution of DyeVC, \({C}_i^{current}\subseteq C\) is the set of commits in \(r_i\) in the current execution of DyeVC, and \(B_i\) is the set of named branches of \(r_i\) (Fig. 14a). It is worth noting that, as C is the set of all commits of project p (i.e., the domain of commits of p), any set of commits that belongs to a specific repository \(r_i \in R\), such as \({C}_i^{previous}\) and \({C}_i^{current}\), must also be contained in C.

UML class diagram representing the DyeVC formalization (a) and a directed acyclic graph of commits (b)

Each commit \(c_j \in C\) has a set of parent commits \({C}_j^{parent}\subset C\). Commits are organized in a directed acyclic graph (Fig. 14b), where the first commit of the project has no parent (e.g., commit A in Fig. 14b), revision commits have only one parent (e.g., commit B in Fig. 14b), and merge commits have two or more parents (e.g., commit I in Fig. 14b). All commits reachable from \(c_j\) form its history, including \(c_j\) itself and the transitive closure over its parents (e.g., {A, B, E, F, H, I, J} is the history of commit J in Fig. 14b). The history of \(c_j \in C\) is formally defined as:

$$ H_j=\left\{c\in C \mid c=c_j\vee \exists c_k:\left(c_k\in C_j^{parent}\wedge c\in H_k\right)\right\} $$

At this point, it is important to notice that ordering is not relevant for determining which commits belong to each repository. The only situation in which ordering matters is when DyeVC plots the commit history graph. In this case, DyeVC accesses the tips of the branches (a fast operation, as each branch has a reference to its tip) and traverses their transitive closure for plotting all previous commits (also fast, because each commit has a reference to its parents). Note that the commit graph is a directed acyclic graph (DAG), and this DAG is already represented in terms of pointers in C. The sets of previous and current commits in a repository \(r_i\) are updated periodically, according to the monitoring frequency parameter defined by the DyeVC user. In the first execution of DyeVC over \(r_i\), \({C}_i^{previous}=\varnothing\) and \({C}_i^{current}\) is populated with all commits obtained directly from Git. In the following executions, \({C}_i^{previous}\) is populated with the commits in \({C}_i^{current}\) of the previous execution, and \({C}_i^{current}\) is again populated with all commits obtained directly from Git. Each branch \(b_k \in B_i\) is a tuple \((name, c_k)\), where name is the name of \(b_k\) and \(c_k \in C\) is the tip (i.e., head) of \(b_k\). Consequently, \(H_k \subseteq C\) contains all reachable commits of \(b_k\).
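The history \(H_j\) above is simply the transitive closure over parent pointers, so it can be computed with a plain graph traversal. A minimal, hypothetical Java sketch (not DyeVC's actual code) is shown below:

import java.util.ArrayDeque;
import java.util.Collections;
import java.util.Deque;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

// Computes H_j: the commit itself plus every commit reachable through parent links.
class CommitHistory {
    static <C> Set<C> historyOf(C tip, Map<C, List<C>> parentsOf) {
        Set<C> history = new HashSet<>();
        Deque<C> pending = new ArrayDeque<>();
        pending.push(tip);
        while (!pending.isEmpty()) {
            C commit = pending.pop();
            if (history.add(commit)) {                               // visit each commit once
                for (C parent : parentsOf.getOrDefault(commit, Collections.emptyList())) {
                    pending.push(parent);                             // follow C_j^parent
                }
            }
        }
        return history;
    }
}

Because the commit graph is a DAG and each commit is visited at most once, this traversal is linear in the size of the history.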
With this foundation established, we can now formalize the process of updating commits in the topology. For a local repository \(r_i \in R\) being monitored by DyeVC, the rare situations where a commit is deleted can be formally defined as:

$$ Del_i={C}_i^{previous}\setminus {C}_i^{current} $$

Each locally deleted commit \(c \in Del_i\) should be removed from \(C_{database}\) if no other repository \(r \in R\) still contains this commit. Conversely, the new commits in \(r_i \in R\) since the previous monitoring cycle can be formally defined as:

$$ New_i={C}_i^{current}\setminus {C}_i^{previous} $$

Each locally added commit that is not already in the database (\(c \in New_i \setminus C_{database}\)) should be inserted into \(C_{database}\). This verification is necessary because some of the locally added commits might have already been inserted into the database by another instance of DyeVC. Moreover, we can formalize the identification of the repositories that contain a specific commit and the repositories that a given repository is ahead of or behind. This information is necessary for building some of our visualizations. We formally define the repositories that contain a commit \(c_j \in C_{database}\) as:

$$ R_j=\left\{r_i\in R \mid c_j\in {C}_i^{current}\right\} $$

We formally define the sets of repositories that \(r_i\) is ahead of or behind, respectively, as:

$$ Ahead_i=\left\{r_j\in {R}_i^{push} \mid \exists c\in {C}_i^{current}:c\notin {C}_j^{current}\right\} $$

$$ Behind_i=\left\{r_j\in {R}_i^{pull} \mid \exists c\in {C}_j^{current}:c\notin {C}_i^{current}\right\} $$

Finally, we can also formalize the commits that are ahead or behind between two specific repositories and the branches to which a commit belongs. This relationship among commits and repositories/branches is also necessary for some of our visualizations. Considering two repositories \(r_i, r_j \in R\), we formally define the commits that are ahead or behind when comparing \(r_i\) with \(r_j\) as:

$$ Ahead_{i,j}={C}_i^{current}\setminus {C}_j^{current} $$

$$ Behind_{i,j}={C}_j^{current}\setminus {C}_i^{current} $$

Considering a given repository \(r_i\), we formally define the branches to which a commit \(c_j \in C\) belongs as:

$$ B_{i,j}=\left\{b_k\in B_i \mid c_j\in H_k\right\} $$

The computation of \(R_j\), \(Ahead_i\), and \(Behind_i\) is not expensive. The set of repositories R is usually small (one or a few repositories per developer), and the complexity of checking whether a commit belongs to a specific repository is O(1) (i.e., the complexity of checking whether an element belongs to a hash-based set). So, we can say that the complexity of obtaining the relationship between commits and repositories is O(n), where n is the number of repositories in the project.

We implemented our approach as a Java application launched via Java Web Start technology. It currently monitors Git repositories, as Git is the most used DVCS nowadays (Eclipse Foundation 2014). The source code and the link to download the tool via Java Web Start can be found at https://github.com/gems-uff/dyevc. The tool gathers information from repositories using the JGit library, which allows using our approach without having a Git client installed. Gathered information is stored in a central document database running MongoDB. We hosted our database on a free MongoDB instance provided by MongoLab. We did not use MongoDB's proprietary API, which would demand opening specific ports to connect to MongoDB. Instead, we opted to use MongoLab's RESTful (Representational State Transfer) API. RESTful APIs (Fielding 2000) have the advantage of being available using standard HTTP and HTTPS protocols. In this way, our approach can be used in environments protected with firewalls without major problems. We implemented a MongoLab Provider to use this RESTful API, which translates the application methods into RESTful commands and vice-versa. It also serializes/deserializes the application objects to/from JSON (JavaScript Object Notation) representations to be used through the RESTful commands.
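To give a flavor of the JGit-based gathering step mentioned above, the following minimal sketch (a hypothetical helper, not DyeVC's actual code) opens a clone and collects the hashes of all commits reachable from its refs, which is essentially the raw material behind the \({C}_i^{current}\) sets:

import java.io.File;
import java.util.HashSet;
import java.util.Set;
import org.eclipse.jgit.api.Git;
import org.eclipse.jgit.revwalk.RevCommit;

// Collects the SHA-1 identifiers of every commit reachable from any ref of a clone.
class CommitCollector {
    static Set<String> collectHashes(String clonePath) throws Exception {
        try (Git git = Git.open(new File(clonePath))) {
            Set<String> hashes = new HashSet<>();
            for (RevCommit commit : git.log().all().call()) {   // walk all reachable commits
                hashes.add(commit.getName());                    // full commit hash
            }
            return hashes;
        }
    }
}

Comparing two such sets with the set subtractions formalized above yields the ahead/behind information presented in the visualizations.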
A central document database was chosen because, this way, DyeVC instances can easily send and gather information. MongoDB was the chosen database because it is free, open-source, and cross-platform. Besides, it has many features to improve performance and availability, such as document indexing, replication, and load balancing. Furthermore, it provides RESTful APIs, as mentioned before. We present the gathered information as a series of graphs by using the JUNG (Java Universal Network/Graph) library, from which DyeVC inherits the ability to extend existing layouts and filters. All graphs present similar behavior, allowing the window to be zoomed in or out, depending on whether the user wants to see details of a particular area or an overview of the entire graph. By changing the window mode from transforming to picking, it is possible to select a group of nodes and collapse them into one node, or simply drag them into new positions to have a better understanding of parts with too many crossing lines.

To evaluate our approach, we first conducted a posthoc evaluation over the JQuery project, an open-source project, aiming at checking if DyeVC can help answer questions Q1-Q3. Next, we conducted an observational evaluation involving four participants that used DyeVC. This evaluation also used the JQuery project. Finally, we ran DyeVC over some open-source projects of different sizes and from different sources, aiming at evaluating the scalability of our approach.

Posthoc evaluation

We conducted a posthoc evaluation using a real open source project to demonstrate that our approach can help in answering questions Q1-Q3. The selected project, JQuery, began in 2006 and had 6222 commits by the time of the evaluation. We reconstructed the repository history, simulating the actions that occurred in the past. We do not replicate the repository history here, due to its size, but it is publicly available on GitHub. Automatically generated comments helped us to depict specific flows. For example, the comment "Merge branch 'master' of https://github.com/scottjehl/jquery into scottjehl-master" tells us that there was a user named "scottjehl" and that the merge operation was done at a branch called "scottjehl-master". Although one might perform a merge manually and insert a different text in the comment, this did not compromise our analysis because our focus was on depicting some of the merge situations, not all of them. Due to the operating mode of Git, some details are missing, but these details do not compromise our analysis. The first one is the moment when a clone is created or ceases to exist. This information does not exist anywhere in the repository. We inferred the creation of clones by looking at the commit messages (a commit by developer X led to the creation of a clone named X). Clones created at a given time stayed alive for the rest of the analysis. The second missing detail is that, although we had the commit dates and times in the repository history, these dates and times were not guaranteed to be correct. This occurs because DVCSs do not have a central clock. Each commit is registered with the local time on the machine where the clone is located, which could lead to commits in the history with a predecessor in the future, depending on when and where each commit was performed. This missing detail is not relevant, because the order of commits is not depicted using their times, but using the pointers that Git maintains from a commit to its parents, as discussed in Section 3.1.
We can use these dates, but not as authoritative information. Finally, if rebases were conducted in the repository, this posthoc evaluation had no means to detect them, since a rebase consists of rewriting the local history to place parallel commits on top of existing commits, consequently leaving no trail of the parallel work. This missing detail is not important for our evaluation either, because this operation is done solely to clean up the repository history, making it easier to understand. However, other posthoc studies that intend to use DyeVC for finding all cases of parallel work should consider rebase as a potential threat to validity. We chose a moment in time when three developers were involved, performing commits and merging changes in the repository. We created three clones for these developers, named after their usernames: jeresig, adam, and aakoch. Figure 15 shows the topology view on Sep 24, 2010, when aakoch had 121 commits pending to be pushed to the central repository (hereafter called central-repo). Figure 16 shows part of aakoch's commit history and how DyeVC represents commits pending to be pushed (green nodes).

First monitored repository in Topology view (Sep 24 2010)

aakoch's commit history showing commits pending to be pushed

Later on, aakoch pushed his commits to central-repo. In the meantime, both adam and jeresig committed some changes. Before they pushed their work to central-repo, adam's last commit was on Jun 21, 2010, and jeresig's on Sep 27, 2010. At this moment, we registered them to be monitored by DyeVC. Figure 17 shows the topology view after this registration on Sep 27, 2010. Here, we can see that aakoch was synchronized with central-repo, whereas adam and jeresig had pending actions.

Three monitored repositories in Topology view (Sep 27 2010)

At this point, we can revisit questions Q1 and Q2:

Q1: Which clones were created from a repository? DyeVC's topology view (Fig. 17) shows all the clones where it is running, and also discovers other clones connected to them, even if it is not running there.

Q2: What are the communication paths among different clones? DyeVC's topology view (Fig. 17) shows the dependencies between peers in the topology, as well as the number of commits ahead or behind in each of these clones.

Adam had 121 commits to pull from central-repo, which is corroborated by the details of his tracked branches (master branch in Fig. 18a). He also had a non-tracked commit pending to be pushed. Non-tracked commits are not shown in the tracked branches view, but we can see them in gray in the commit history views. Fig. 18b shows the collapsed commit history for jeresig, where we can see adam's non-tracked commit with hash a2bd8.

Adam's tracked branches (a) and collapsed commit history for repository jeresig (b)

The repository history leads us to think that jeresig is a core developer of this project because he performed most of the merges to the master branch. Looking at Fig. 17, we see that he had 26 commits pending to be pushed to central-repo. These 26 commits can be seen in aakoch's commit history (Fig. 19) as red commits, since they could not be pulled by aakoch until jeresig had pushed them to central-repo. There was also a commit in central-repo pending to be pulled by jeresig. If we look back at Fig. 18b, we see that the only yellow commit is a0887, made by aakoch. This tells us that jeresig pulled changes from central-repo just before aakoch pushed commit a0887.
20, we see that all pending commits (those that were pending to be pushed and pulled) are related to the same branch (master). This tells us that, if jeresig wanted to push these commits to central-repo, he would have to perform a pull operation before. Aakoch's commit history Jeresig's tracked branches This analysis helps us revisit and answer Q3: Q3: Which changes are under work in parallel (in different clones or different branches) and which of them are available to be incorporated into others' clones? New commits in tracked branches of peers can be easily found by looking at Level 3 information (tracked branches, shown in Fig. 18a and Fig. 20). This view shows to which branch these commits are related and how many new commits exist. If we want to look at each commit individually, we can look at Level 4 information (commit history, shown in Fig. 16 and Fig. 19) and notice the yellow nodes. Additionally, Level 4 information can be used to find new commits in repositories that are not peers (red nodes), or new commits in non-tracked branches (gray nodes). Observational evaluation We conducted an observational evaluation over the same project used in the posthoc evaluation (JQuery) to assess the capability of the visualizations provided by DyeVC in supporting developers and repository administrators. The evaluation was conducted with four volunteers, which had previous experience with DVCS. They were graduate students from the Software Engineering research area at Universidade Federal Fluminense (UFF). Four sessions were conducted, each of them with one subject. The goal of this observational evaluation was to analyze when DyeVC helps on understanding the project history better than existing tools. The evaluation was divided into two phases (without and with DyeVC), each one with two scenarios, where the subject had to answer questions related to usual work with DVCS. In Scenario 1, the subject played the developer role, working in a clone named aakoch. In Scenario 2, the subject played the repository administrator role. The following questions were posed: Q1.1 What is the status of your clone, compared to the central repository? Q1.2 Who else is working in the JQuery project (other clones)? Q1.3 Which files were modified in commit 5d454? Q2.1 What are the existing clones for JQuery project? Q2.2 Which clones are synchronized with the central repository? Q2.3 How many commits in tracked branches are pending to be sent to the central repository? Q2.4 Is there any commit in non-tracked branches? Where? In Phase 1 (without DyeVC), DyeVC was not in place, and the subject answered the questions using any desired DVCS client among the ones available in the computer used in the evaluation: gitk , Tortoise Git , Git Bash, and SourceTree. Participants were allowed to access the Internet and search any other procedure or tool that could help in answering the questions. After that, the subject watched a 10-min video presenting DyeVC and started Phase 2 (with DyeVC), which consisted of answering the same questions with the help of DyeVC. The possible answers in Phase 2 were either "keep the answer of Phase 1", meaning that using DyeVC did not change the subject perception, or a different answer, otherwise. Table 3 presents the time spent by each subject to answer each question in both scenarios and both phases. The values include the time to understand the question, investigate repositories with available tools, look for the answer, and write down the answers in the form. 
However, the values do not include the time spent filling the consent form and the characterization form, watching the video about DyeVC, and filling the exit questionnaire. It is possible to notice, by looking at Table 3, that all subjects took less time to complete Scenario 1 (developer role) in Phase 2 (with DyeVC). For Scenario 2 (admin role), none of the subjects managed to answer the questions in Phase 1 (without using DyeVC). For this reason, times shown in Phase 1 are the times spent by the subjects until they gave up finding an answer. Table 3 Time spent (in minutes) to answer each question In Phase 1, each subject used different ways to look for the answers. In Phase 2, subjects correctly used DyeVC to find the answers. Question 1.1 was answered using DyeVC Level 3 visualization (Tracked branches). Question 1.3 was answered using Level 4 visualization (Commit History). Finally, questions 1.2 and 2.1 through 2.4 were answered using Level 2 visualization (Topology). Almost all subjects answered all the questions similarly, except for subject P4 in question 1.2 from Phase 1. Subject P1 answered questions 1.1 and 1.3 in Phase 1 using the command line interface. To answer question 1.1, she looked at the log for both local and remote repositories, counted down how many hashes there were in each log and subtracted these numbers to find the answer. Question 1.3 was answered with git show command, which shows, for each affected file in the commit, what has changed. The answer to this question was easy to find because only one file was affected, but if many files had been affected, the subject would have trouble finding all affected files using this procedure. For questions 1.2 and 2.1 through 2.4, the subject tried to find a way to discover related clones by searching the Internet. After a few searches with no promising results, the subject gave up, and her answer was "I don't know". Once there was no answer to question 2.1, next questions in Scenario 2 could not be answered as well. Subject P2 answered question 1.1 by issuing the git status command. To answer question 1.3, she used Tortoise Git and walked through the commit tree until finding the desired commit. For questions 1.2 and 2.1 through 2.4, the subject answered that she did not know a way to find an answer. When answering question 2.1, the subject commented that, as a repository manager, she should know which were the existing clones and their relationships, but she did not have any resources available to accomplish that. Subject P3 answered question 1.1 by issuing a git status command (same as subject P2). To answer question 1.3, she used Tortoise Git but found the desired commit using the search feature of the tool, instead of walking through the commit tree. For questions 1.2 and 2.1 through 2.4, the subject answered that it was not possible to find an answer. Subject P4 answered questions 1.1 and 1.3 using SourceTree. This subject answered question 1.2 differently from the others. She wrote down each different author of each commit as if it was a different clone. Although this is a valid interpretation, it may happen that authors commit changes in the same clone, and this would lead to a wrong answer for this question. For questions 2.1 through 2.4, the subject answered that it was not possible to find an answer. The overall results of this evaluation were positive. In Phase 1 (without DyeVC), subjects were able to correctly answer questions Q1.1 and Q1.3 whether using DyeVC or not. 
Also, further questions were answered correctly only by using DyeVC. The subjects also answered an exit questionnaire (Footnote 7) (Cesario 2015). All subjects found it easy to interact with DyeVC, to identify related repositories, and to use the operations available. They consensually elected the topology visualization as the most helpful visualization in DyeVC. Also, by using Product Reaction Cards, three out of the four subjects stated that DyeVC is helpful and easy to use. Product Reaction Cards (Benedek and Miner 2003) have a large set of words, both positive and negative, used to check the emotional response to a product or design. Performance evaluation We measured the time spent to perform the most common DyeVC operations to evaluate the scalability of our approach. We used projects of different sizes and hosted on different Git servers. Table 4 shows the monitored projects (name and hosting service), the repository metrics (the number of commits, disk usage, and the number of files) and the time spent to run some background and foreground operations in DyeVC. All measurements were taken in the same period of the day and from the same machine, a Core Duo CPU at 2.53 GHz, with 4GB RAM running Windows 8.1 Professional 64 bits, connected to the internet at 35 Mbit/s. Each operation was performed once for each repository, except for the repository registration, which was executed twice ("Insert 1st" and "Insert 2nd"), as detailed below. Table 4 Scalability results of DyeVC for repositories with different sizes We measured the main operations of our approach: "Insert 1st", invoked when the user registers the first repository of a given system to be monitored; "Insert 2nd", invoked when the user registers a repository to be monitored in a system that already has registered repositories; "Commit History", invoked when the user requests to see the commit history of a given repository; "Topology", invoked when the user wants to see the topology of repositories of a given system; "Check Branches", invoked periodically to check all monitored repositories, searching for ahead or behind commits; and "Update Topology", invoked periodically to update topology information in the central database. This last operation updates the existing repositories, their peers, and the existing commits, marking in which repositories each commit is found. It may be noted that the "Commit History" operation has no values for the last three repositories. This occurs because, as the number of commits increases, more memory is used to calculate the commit history graph. The current algorithm has an O(x²) space complexity (where x is the number of commits). The computer used in this evaluation was configured with a 2 GB maximum Java Heap Size, which let us analyze repositories with up to 6K commits. This limitation occurs mainly because of JUNG. Table 5 shows the correlation between each repository size metric and the DyeVC operations' execution time, according to Spearman's rank correlation coefficient (Spearman 1904). This correlation coefficient measures the monotonic relation between two variables and ranges from −1 to 1. Values of 1 or −1 mean that each variable is a perfect (increasing or decreasing) monotone function of the other. A value of 0 means that there is no correlation between the variables.
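To illustrate how a table such as Table 5 can be produced, Spearman's coefficient can be computed with SciPy; the numbers below are placeholders rather than the values measured in this evaluation:

from scipy.stats import spearmanr

# Placeholder repository metrics and "Topology" operation times (seconds).
commits  = [1200, 3400, 6100, 15000, 42000]
size_mb  = [4.1, 12.0, 35.5, 420.0, 95.0]
files    = [310, 900, 2100, 7800, 3300]
topology = [2.1, 3.0, 4.8, 22.5, 12.7]

for name, metric in [("commits", commits), ("size", size_mb), ("files", files)]:
    rho, pval = spearmanr(metric, topology)
    print(f"Topology time vs {name}: rho = {rho:.2f} (p = {pval:.3f})")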
Table 5 Spearman's rank correlation coefficient between repository size metrics and DyeVC operations time Looking at Table 5, it is possible to notice that, except for the "Check Branches" operation, all other operation times are strongly correlated to the number of commits and repository size. This is due to the nature of these operations, which update or show information about all commits in the repository. On the other hand, except for the "Commit History" operation, all other operation times correlate with the number of files. This is also expected due to the nature of "Commit History" operation, which does not dig into the changed files. However, it is possible to find some more tricky situations, which demonstrate that all three variables (number of commits, size, and number of files) should be taken into consideration when analyzing the performance of each DyeVC operation. One such situation is the one that occurs with Git Extensions, which has significantly fewer commits than repositories such as Git, but presents times for "Topology", "Insert 1st" and "Insert 2nd" operations in the same level of magnitude. This is because these operations are very I/O intensive. When a repository is registered to be monitored, DyeVC creates the working copy for that repository, as discussed in Section 3.1. Larger repositories will then take more time to perform these actions. Note in Table 4 that the size of Git Extensions is notably bigger than any other of the repositories used in the evaluation. Finally, it is worth mentioning that, even with all measurements taken in the same period of the day and from the same machine, short network latencies and processor usage peaks may have occurred, which affect the results. All in all, although we cannot affirm that DyeVC is scalable to all possible repositories, our evaluation helped us to identify the scalability limits of DyeVC. Without automatic collapses, DyeVC was able to process repositories with around 6500 commits. To put this number in perspective, 99% of 50,012 projects analyzed by Rainer and Gale (2005) had less than 3137 commits. Moreover, Kalliamvakou et al. (2014) indicate that 90% of the projects in GitHub have less than 50 commits. This shows that DyeVC is scalable for a large number of projects, although we still see space for improvements, as presented in the following section. Automatic collapsing evaluation We also studied the impact of the automatic collapsing algorithm in the "Commit History" operation performance. This evaluation was performed at a later time in comparison with the results obtained in the previous section. Consequently, the repository metrics are slightly different. The repository size, number of commits, and number of files are higher, as shown in Table 6. Table 6 Characterization of the repositories used in the evaluation of the automatic collapsing algorithm The design of the evaluation was as follows. First, the "Commit History" operation was performed without using the automatic collapsing. Afterward, sequential and parallel collapse strategies described in Section 3.2.5 were used to simplify the structure of the commit graph, collapsing the corresponding node structures. The execution of the sequential strategy was the first stage, and the parallel strategy was the second stage of each iteration of the automatic collapsing algorithm. Moreover, after each stage, running time and memory consumption were measured. 
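For intuition about what a collapse stage does, the sketch below merges maximal chains of commits that have exactly one parent and one child and share the same colour; this is only our simplified reading of the sequential strategy of Section 3.2.5, not the algorithm actually implemented in DyeVC (networkx stands in for JUNG here):

import networkx as nx

def collapse_chains(g):
    # Fold every commit that has exactly one parent and one child of the same
    # colour into its parent, keeping a counter of how many commits it absorbs.
    h = g.copy()
    for node in list(g.nodes):
        if node not in h:
            continue
        preds, succs = list(h.predecessors(node)), list(h.successors(node))
        if len(preds) == 1 and len(succs) == 1:
            parent, child = preds[0], succs[0]
            if h.nodes[node].get("color") == h.nodes[parent].get("color"):
                h.add_edge(parent, child)
                h.nodes[parent]["collapsed"] = h.nodes[parent].get("collapsed", 1) + 1
                h.remove_node(node)
    return h

# A linear chain of five same-coloured commits collapses to its two endpoints.
g = nx.DiGraph()
g.add_nodes_from((i, {"color": "green"}) for i in range(5))
g.add_edges_from([(0, 1), (1, 2), (2, 3), (3, 4)])
print(collapse_chains(g).number_of_nodes())  # 2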
The evaluation was executed in a Core i7 CPU at 2.00 GHz, with 16GB of RAM running Windows 7 64 bits. We evaluated the capability of the automatic collapsing algorithm to reduce the number of nodes in the commit graph without compromising the quality of the information shown in the graph (collapses are performed just for commits of the same type, i.e., same color, as discussed in section 3.2.4). Table 7 shows the reduction achieved after two iterations of the algorithm. The number of iterations was set to two due to empirical observation that there was almost no reduction after a second iteration. With two iterations, the algorithm can reduce the number of nodes by an average of 73% compared to the original graph. In some cases, such as Drupal or ExpressoLivre, which are repositories that we could not analyze before, the nodes reduction surpassed 90%, allowing us to visualize their commit history graph after the automatic collapsing process. Table 7 Reduction of the number of nodes by the automatic collapsing algorithm Furthermore, we analyzed the running time and memory consumption of the "Commit History" operation. In particular, data collected for repositories that were visualized before and after collapse are represented in Fig. 21 and Fig. 22 using boxplots. The figures show that the more collapse stages we execute, the less time is needed to represent the commit history, and the less memory is consumed for this purpose. This can be explained by the fact that the automatic collapsing algorithm is linear and very fast comparing to the subsequent visualization process, and speeds up the presentation of the commit graph. Using this method, significantly lower running times and memory consumption values are obtained, compared with values before the automatic collapsing. It was possible to visualize repositories with tens of thousands of nodes (Drupal and ExpressoLivre), which could not be represented before, without applying automatic collapsing process. Boxplot showing the running time of "Commit History" operation depending on the number of executed collapse stages Boxplot showing the memory consumption of "Commit History" operation depending on the number of executed collapse stages Git was the only repository that was not represented visually, even after the automatic collapsing. The main contributing reasons for this fact are its high number of nodes, its low nodes reduction rates, and its inherent complexity. In the case of Drupal and ExpressoLivre repositories, high nodes reduction rates seem to be influenced by a somewhat more linear structure of the commit graph. There are long chains of 2-degree nodes, corresponding to sequential work stages performed by one contributor. Instead, Git repository showed resistant to collapse. To explain what we mean by "intrinsic complexity" of Git, we identified some structures that prevent commit graph's reduction. An example is shown in Fig. 23. Given the current definition of the collapse operations, the whole structure cannot be reduced because the branches are not sequential chains of commits. Additional automatic collapsing heuristics that consider the possible dependencies between different branches seem necessary to accommodate these cases. Example of a structure that prevents automatic collapsing Threats to validity While we have taken care to minimize threats to the validity of the evaluations, some factors can influence the results. 
The usage of a posthoc evaluation to assess a real project may not reflect the exact sequence of events that occurred, although the outcome did not change. For example, when we say that aakoch, at some moment, had 121 commits pending to be pushed to the central repository, these commits could have been pushed at once or by a series of smaller pushes. Moreover, only one project was selected to perform the analysis, which imposes limitations from a generalization standpoint. Furthermore, we used an open source project to perform the posthoc evaluation, but the modus operandi of peers may be different in academic or industrial contexts. In the observational evaluation, the selection of subjects was made by asking for volunteers from students in the same research group as the authors. This was necessary due to time and people restrictions. Therefore, this group might not be representative and can be biased. Moreover, there were few subjects in this evaluation. Thus, the results may have been influenced by the size and by specific characteristics of the group. Furthermore, subjects performed tasks involving DyeVC right after getting to know the approach, giving them no time to assimilate the tool. Results may have been influenced by this lack of time to mature the necessary knowledge to use the approach efficiently. Also, subjects could have answered questions in Phase 2 faster than in Phase 1 due to their learning regarding the scenario. Finally, there is a risk regarding the instrumentation used to measure the response times during the performance evaluation. As we used a database stored over the Internet, connectivity issues and network instability may have affected the response times. Related work According to Diehl (2007), software visualization can be separated into three aspects: structure, behavior, and evolution. DyeVC relates primarily to the evolution aspect, more specifically to studies that aim at improving the awareness of developers who work with distributed software development. Steinmacher et al. (2012) present a systematic review of awareness studies, which we used to perform a forward and backward snowballing. The approaches obtained after the snowballing were divided into four groups. The first group ("Commit notification") includes approaches that notify commit activities. The second group ("Awareness of concurrent changes") comprises approaches that not only give the developer awareness of concurrent changes but also inform them about conflicts. The third group ("Repository visualization") includes approaches that visualize repository information. Finally, the fourth group ("DVCS clients") contains commercial and open source DVCS clients. The first group contains tools such as SVNNotifier (Footnote 8), SCMNotifier (Footnote 9), Commit Monitor (Footnote 10), SVN Radar (Footnote 11), Hg Commit Monitor (Footnote 12) and Elvin (Fitzpatrick et al. 2006). The primary focus of these approaches is on increasing the developer's perception of concurrent work by showing notifications whenever other developers perform actions. The approaches in this group do not identify related repositories and do not provide information on different levels of detail, such as status, branches, and commits. DyeVC provides these different levels of detail, as shown in Section 3.2. The second group comprises approaches that give the developer awareness of concurrent changes, sometimes informing them if conflicts are likely to occur.
This group includes tools such as Palantir (Sarma and van der Hoek 2002), CollabVS (Dewan and Hegde 2007), Crystal (Brun et al. 2011), Lighthouse (da Silva et al. 2006), FASTDash (Biehl et al. 2007), and WeCode (Guimarães and Silva 2012). Among these, only Crystal and FASTDash work with DVCSs. Crystal detects physical, syntactic, and semantic conflicts in Mercurial and Git repositories (provided that the user informs the compiling and testing commands), but does not precisely deal with repositories that pull updates from more than one peer. FASTDash does not detect conflicts directly, as the previously cited studies, but provides awareness of potential conflicts, such as two programmers editing the same region of the same source file in repositories stored in Microsoft Team Foundation Server. Although DyeVC primary focus is not to detect conflicts, it can be combined with such approaches to allow conflicts and metrics analysis over DVCS. The third group includes approaches that visualize repository information. Each approach has a different visualization focus, such as program structures (Collberg et al. 2003), classes (Lanza 2001), lines (Voinea et al. 2005), authors (Gilbert and Karahalios 2006), and branch history (Elsen 2013)Footnote 13 ,. Footnote 14 The latter have the same focus of DyeVC's Commit History visualization, but dealing only with the local repository, not showing, for example, where a given commit can be found in related repositories. Finally, the fourth group includes commercial/open source DVCS clients, which allows one to execute operations on repositories/clones (push, pull, checkout, commit, etc.) and also visualizing the repository history, i.e., the commits, along with their attributes (comment, date, affected files, committer, etc.). For example, some Git clients include gitk, Footnote 15 Tortoise Git, Footnote 16 EGit for Eclipse, Footnote 17 and SourceTree. Footnote 18 The data about commits shown by these tools varies, but usually involves the committer name, message, date, affected files, and a visual representation of the history. These tools, though, have no knowledge regarding peers. For this reason, they do not present commits from other clones and do not include information about where each commit can be found. It is worth noticing that we could not find any similar work showing the dependencies among several clones of a DVCS. 
Table 8 compares DyeVC with each group used to classify related work presented in this section, according to the following features: notifications (Which types of notification the approach supports?); CVCS (Does the approach support CVCS?); DVCS (Does the approach support DVCS?); related repositories (Does the approach identifies related repositories?); levels of detail (Does the approach present information in different levels of detail?); multiple peers (Does the approach support repositories with multiple peers, i.e., multiple pull / push destinations?); commits in peer nodes (Does the approach detects commits in peer nodes, i.e., nodes that have a direct communication path to each other?); commits in non-peer nodes (Does the approach detect commits in non-peer nodes, i.e., nodes that do not have a direct communication path to each other?); multiple branches (Does the approach support multiple branches in DVCS?); topology (Does the approach supply any topology visualization that shows dependencies among repositories?); and, finally, commit History (Does the approach allow visualizing only a partial commit history, showing only local commits, or does it allow visualizing a full commit history, including commits in other repositories that were not synchronized yet, or that are in non-tracked branches?). Table 8 Comparing DyeVC features with related work All in all, among related work, Crystal is the most similar to DyeVC and deserves a deeper comparison. Both approaches work with DVCSs (besides Git, Crystal also supports Mercurial) and use working copies to perform analyses, but there are major differences between them. Crystal's goal is to identify conflicts among pairs of repositories, whereas DyeVC's goal is to provide awareness regarding the existing peers and their synchronization, at different levels. To identify repositories, Crystal demands the user to point out all repositories they want to compare, whereas DyeVC requires that some of the repositories be registered and it automatically looks at configuration files to discover all the repositories that one pushes to or pulls from. The repository comparison in Crystal is from one repository against all the other together, whereas DyeVC analyzes each repository against each other, providing a pairwise view and a combined view of the history. Finally, the allowed actions in Crystal include the ability to push, pull, compile, and test a repository, whereas DyeVC allows one to visualize branches status, topology, and history. In this way, we see potential to have both tools working together to provide awareness and safety better when working with DVCS. Trace reduction methods and automatic collapsing Trace reduction is the compression of traces in some manner (either lossless or lossy) so that they can be stored and processed efficiently (Kaplan et al. 1999; Mohror and Karavanic 2009). The process of collapsing the commit graph can be seen as a particular case of trace reduction. Program analysis and software visualization communities have already proposed trace reduction methods (Kuhn and Greevy 2006; Cornelissen et al. 2008; Noda et al. 2012; Jayaraman et al. 2017). In (Kuhn and Greevy 2006) and (Cornelissen et al. 2008), a trace reduction technique is mentioned, which assigns consecutive events that have equal or increasing nesting levels to the same group. Particularly, method call sequences are summarized in one method-call chain (Kuhn and Greevy 2006). Also, compact sequence diagram generation is studied in (Noda et al. 
2012) and (Jayaraman et al. 2017). To have a better sequence diagram representation of the program execution, Noda's method (Noda et al. 2012) abstracts the history of object interaction by grouping strongly correlated objects. These objects are compacted, achieving an appropriate reduction in the number of objects appearing in the sequence diagram, which results in compression of this diagram along the horizontal axis. Also, (Jayaraman et al. 2017) presents vertical and horizontal sequence diagram compaction techniques. For this end, one-to-one correspondence between call trees and sequence diagrams is used. A maximally compacted tree is obtained, generating smaller and more useful diagrams. These approaches work with method-call sequences and sequence diagrams, both having a tree structure. Unlike these works, when applying automatic collapsing, we deal with a commit graph, which is a DAG, with a high number of commit nodes. The structure of a graph is more rich and complex than the structure of a tree. In this paper, we presented DyeVC, an approach that identifies the status of a repository in contrast with its peers, which are dynamically found in an unobtrusive way. We have evaluated DyeVC on a real project, showing that it can be used to answer questions that arise when working with DVCSs. The observational evaluation results were promising: DyeVC was considered easy to use and fast for most repository history exploration operations while providing the expected answers. This provides initial evidence that DyeVC could effectively help developers and repository administrators by saving time and by supporting answering questions regarding DVCS usage that could not be answered before. We have also evaluated DyeVC's performance over repositories of different sizes, and we found out that the time and space complexity of the approach are directly related to the number of commits in the repository, especially in the view levels with finer granularity. Some future research topics arise from this work. DyeVC could gather additional metadata, for example, to create a visualization showing conflicts that would happen when merging two or more branches. This data could also be used to mine information in the repositories, revealing usage patterns or presenting metrics. Moreover, the formalization of DyeVC mechanics could be used to prove correctness properties of our implementation. Finally, some optimization should be done to allow DyeVC work with larger repositories with more complex branch structures. Dye is commonly used in cells to observe the cell division process. As an analogy, DyeVC allows developers to observe how a Version Control repository evolved over time. http://www.eclipse.org/jgit/ http://jung.sourceforge.net/ https://github.com/jquery/jquery Considering the scenario just after commit a088751a1b2c5761dab8de9d7da8602defb45b11. Considering the scenario just after commit ea6a4813b7d996f6f7af0b61a5f1bf4ab80b291d. The exit questionnaire can be found in Appendix G of the referenced Master's Thesis, which can be found at https://github.com/gems-uff/dyevc/blob/master/docs/dissertation.pdf. 
http://svnnotifier.tigris.org/ (2012) https://github.com/pocorall/scm-notifier (2012) http://tools.tortoisesvn.net/CommitMonitor.html (2013) http://code.google.com/p/svnradar/ (2011) https://bitbucket.org/dun3/hgcommitmonitor (2009) Visugit: https://github.com/hozumi/visugit GitHub's Network Graph: https://github.com/blog/39-say-hello-to-the-network-graph-visualizer http://git-scm.com/docs/gitk https://tortoisegit.org/ http://eclipse.org/egit/ http://www.sourcetreeapp.com/ CVCS: Centralized version control systems Directed acyclic graph DVCS: Distributed version control systems Hypertext transfer protocol HTTP secure JavaScript object notation JUNG: Java Universal network/graph RESTful: Representational State transfer Unified modeling language Version control systems Appleton B, Berczuk S, Cabrera R, Orenstein R (1998) Streamed lines: branching patterns for parallel software development, Pattern languages of programs conference (PLoP), p 98 Benedek J, Miner T (2003) Measuring desirability: new methods for evaluating desirability in a usability lab setting. In: Proceedings of usability professionals association (UPA), Orlando, p 57 Biehl JT, Czerwinski M, Smith G, Robertson GG (2007) FASTDash: a visual dashboard for fostering awareness in software teams. In: ACM conference on human factors in computing systems (CHI). ACM, San Jose, pp 1313–1322 Brun Y, Holmes R, Ernst MD, Notkin D (2011) Proactive detection of collaboration conflicts. In: ACM SIGSOFT symposium and European conference on foundations of software engineering (ESEC/FSE). ACM, Szeged, pp 168–178 Cederqvist P (2005) Version management with CVS. Free Software Foundation Cesario C (2015) Awareness over distributed version control systems. Master's thesis, Universidade Federal Fluminense - UFF Cesario CM, Murta LGP (2016) Topology awareness for distributed version control systems. In: Proceedings of the 30th Brazilian symposium on software engineering (SBES). ACM, Maringá, pp 143–152 Chacon S (2009) Pro Git, 1st edn. Apress, Berkeley Collberg C, Kobourov S, Nagra J, Pitts J, Wampler K (2003) A system for graph-based visualization of the evolution of software. In: ACM symposium on software visualization (SOFTVIS). ACM, San Diego, pp 77–ff Collins-Sussman B, Fitzpatrick BW, Pilato CM (2011) Version Control with Subversion. Compiled from r4849. O'Reilly Media, Stanford Cornelissen B, Moonen L, Zaidman A (2008) An assessment methodology for trace reduction techniques. IEEE International Conference on Software Maintenance, In, pp 107–116 Dewan P, Hegde R (2007) Semi-synchronous conflict detection and resolution in asynchronous software development. In: European conference on computer-supported cooperative work (ECSCW). Springer London, Limerick, pp 159–178 Diehl S (2007) Software visualization: visualizing the structure, behaviour, and evolution of software. Springer, Berlin, New York Eclipse Foundation (2014) 2014 annual eclipse community report. Eclipse Foundation, San Francisco Elsen S (2013) VisGi: Visualizing Git branches. In: IEEE working conference on software visualization (VISSOFT). IEEE, Eindhoven, pp 1–4 Estublier J (2000) Software configuration management: a roadmap. In: International conference on software engineering (ICSE). ACM, Limerick, pp 279–289 Fielding RT (2000) Architectural styles and the Design of Network-Based Software Architectures. Thesis, University of California Fitzpatrick G, Marshall P, Phillips A (2006) CVS integration with notification and chat: lightweight software team collaboration. 
In: ACM conference on computer-supported cooperative work (CSCW). ACM, Banff, pp 49–58 Gilbert E, Karahalios K (2006) LifeSource: two CVS visualizations. In: ACM conference on human factors in computing systems (CHI). ACM, Montreal, pp 791–796 Guimarães ML, Silva AR (2012) Improving early detection of software merge conflicts. In: Internation conference on software engineering (ICSE). IEEE Press, Zürich, pp 342–352 Gumm D-C (2006) Distribution dimensions in software development projects: a taxonomy. IEEE Softw 23:45–51 Jayaraman S, Jayaraman B, Lessa D (2017) Compact visualization of java program execution. Softw Pract Exp 47:163–191. doi:10.1002/spe.2411 Kalliamvakou E, Gousios G, Blincoe K, Singer L, German DM, Damian D (2014) The promises and perils of mining GitHub. In: Proceedings of the 11th working conference on mining software repositories. ACM, New York, pp 92–101 Kaplan SF, Smaragdakis Y, Wilson PR (1999) Trace reduction for virtual memory simulations. In: Proceedings of the 1999 ACM SIGMETRICS international conference on measurement and Modeling of computer systems. ACM, New York, pp 47–58 Kuhn A, Greevy O (2006) Exploiting the analogy between traces and signal processing. In: 22nd IEEE international conference on software maintenance, pp 320–329 Lanza M (2001) The evolution matrix: recovering software evolution using software visualization techniques. In: International workshop on principles of software evolution (IWPSE). ACM, Tokyo, pp 37–42 Mohror K, Karavanic KL (2009) Evaluating similarity-based trace reduction techniques for scalable performance analysis. In: Proceedings of the conference on high performance computing networking, storage and analysis. ACM, New York, pp 55:1–55:12 Noda K, Kobayashi T, Agusa K (2012) Execution trace abstraction based on meta patterns usage. In: 19th working conference on reverse engineering, pp 167–176 O'Sullivan B (2009a) Mercurial: the definitive guide, 1st edn. O'Reilly Media, Sebastopol O'Sullivan B (2009b) Making sense of revision-control systems. CACM 52:56–62 Perry DE, Siy HP, Votta LG (1998) Parallel changes in large scale software development: an observational case study. In: International conference on software engineering (ICSE). IEEE Computer Society, Kyoto, pp 251–260 Rainer A, Gale S (2005) Evaluating the quality and quantity of data on open source software projects. Proceedings of the 1st international conference on open source software Rochkind MJ (1975) The source code control system. IEEE Trans Softw Eng 1:364–470 Sarma A, van der Hoek A (2002) Palantir: coordinating distributed workspaces. In: 26th computer software and applications conference (COMPSAC). IEEE, Oxford, pp 1093–1097 da Silva IA, Chen PH, Van der Westhuizen C, Ripley RM, van der Hoek A (2006) Lighthouse: coordination through emerging design. In: Workshop on eclipse technology eXchange (ETX). ACM, Portland, pp 11–15 Spearman C (1904) The proof and measurement of association between two things. Am J Psychol 15:72–101. doi:10.2307/1412159 Steinmacher I, Chaves A, Gerosa M (2012) Awareness support in distributed software development: a systematic review and mapping of the literature. In: ACM conference on computer-supported cooperative work (CSCW). ACM, Seattle, pp 1–46 Tichy W (1985) RCS: a system for version control. Soft Pract Exp 15:637–654 Voinea L, Telea A, van Wijk JJ (2005) CVSscan: visualization of code evolution. In: ACM symposium on software visualization (SOFTVIS). 
ACM, Saint Louis, pp 47–56 Walrad C, Strom D (2002) The importance of branching models in SCM. IEEE Comput 35:31–38 Wloka J, Ryder B, Tip F, Ren X (2009) Safe-commit analysis to facilitate team software development. In: International conference on software engineering (ICSE). IEEE Computer Society, Vancouver, pp 507–517 We thank CNPq and FAPERJ for the financial support. CNPq and FAPERJ sponsored this work. The source code and the link to download DyeVC via Java Web Start can be found at https://github.com/gems-uff/dyevc. All projects used in the evaluations are available in their respective repositories, described in Table 4. Instituto de Computação, Universidade Federal Fluminense (UFF), Niteroi, RJ, Brazil Cristiano Cesario, Ruben Interian & Leonardo Murta CC contributed to the design and implementation of DyeVC and the design of the automatic collapsing algorithm and was responsible for running the posthoc, observational, and performance evaluations. RI contributed to the design and implementation of the automatic collapsing algorithm and was responsible for running the automatic collapsing evaluation. LM contributed to the design of DyeVC, the automatic collapsing algorithm, and the evaluations, and was responsible for the DyeVC formalization. All three authors contributed to writing the paper. All authors read and approved the final manuscript. Correspondence to Leonardo Murta. Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. Cesario, C., Interian, R. & Murta, L. DyeVC: an approach for monitoring and visualizing distributed repositories. J Softw Eng Res Dev 5, 5 (2017). https://doi.org/10.1186/s40411-017-0039-8 Accepted: 14 July 2017 Distributed version control 30th Brazilian Symposium on Software Engineering
If $A$ is a set with elements $a, \{b\},\{c\}$ then why $\{a,\{b\}\} \nsubseteq P(A)$, where $P(A)$ is the power set of $A$ A textbook on Elementary Set Theory shows an example which says: given a set $A=\{a,\{b\},\{c\}\}$, find out whether the following statements are correct: $$ a)\ \ \{a,\{b\}\}\in P(A) \qquad b)\ \ \{a,\{b\}\} \subseteq P(A)$$ Definition: the power set of any set $A$ is the set of all subsets of $A$, including the empty set and $A$ itself, i.e. $$ P(A) = \{\emptyset, \{a\}, \{\{b\}\}, \{\{c\}\}, \{a,\{b\}\}, \{a,\{c\}\}, \{\{b\},\{c\}\}, \{a,\{b\},\{c\}\}\}$$ From the definition it is clear that statement $a)$ is obviously true, but the textbook claims that the second statement $b)$ is incorrect, and here I am stuck. According to the definition $a, \{b\}\notin P(A)$; since $P(A)$ contains the set $\{a,\{b\}\}$ that consists of the elements $a$ and $\{b\}$, the set $\{a,\{b\}\}$ should be a subset of $P(A)$, and that is why the statement $\{a,\{b\}\}\subseteq P(A)$ should be true. But somehow it is incorrect. Would anyone kindly point out where I am wrong? Any help is highly appreciated. proof-verification elementary-set-theory I have found a similar video tutorial youtube.com/watch?v=y8arakl4WNM which also supports the claim of the book. – vbm Aug 3 '19 at 4:08 What do you mean when you write $a, \{b\}\nsubseteq P(A)$? – Taroccoesbrocco Aug 3 '19 at 4:11 I mean as elements they are not in $P(A)$ – vbm Aug 3 '19 at 4:14 So, you should write $a, \{b\} \notin P(A)$, and not: $a, \{b\}\nsubseteq P(A)$. – Taroccoesbrocco Aug 3 '19 at 4:15
CommonCrawl
Reverse-engineering of gene networks for regulating early blood development from single-cell measurements Jiangyong Wei1, Xiaohua Hu2, Xiufen Zou3 & Tianhai Tian4 BMC Medical Genomics volume 10, Article number: 72 (2017) Recent advances in omics technologies have raised great opportunities to study large-scale regulatory networks inside the cell. In addition, single-cell experiments have measured the gene and protein activities in a large number of cells under the same experimental conditions. However, a significant challenge in computational biology and bioinformatics is how to derive quantitative information from the single-cell observations and how to develop sophisticated mathematical models to describe the dynamic properties of regulatory networks using the derived quantitative information. This work designs an integrated approach to reverse-engineer gene networks for regulating early blood development based on single-cell experimental observations. The Wanderlust algorithm is initially used to develop the pseudo-trajectory for the activities of a number of genes. Since the gene expression data in the developed pseudo-trajectory show large fluctuations, we then use Gaussian process regression methods to smooth the gene expression data in order to obtain pseudo-trajectories with much smaller fluctuations. The proposed integrated framework consists of both bioinformatics algorithms to reconstruct the regulatory network and mathematical models using differential equations to describe the dynamics of gene expression. The developed approach is applied to study the network regulating early blood cell development. A graphic model is constructed for a regulatory network with forty genes and a dynamic model using differential equations is developed for a network of nine genes. Numerical results suggest that the proposed model is able to match experimental data very well. We also examine the networks with more regulatory relations and numerical results show that more regulations may exist. We test the possibility of auto-regulation but numerical simulations do not support the positive auto-regulation. In addition, robustness is used as an important additional criterion to select candidate networks. The research results in this work show that the developed approach is an efficient and effective method to reverse-engineer gene networks using single-cell experimental observations. The advances in omics technologies have generated a huge amount of information regarding gene expression levels and protein kinase activities. The availability of the large datasets provides unprecedented opportunities to study large-scale regulatory networks inside the cell by using various types of omics datasets [1, 2]. The majority of the generated datasets are based on measurements using a population of cells. However, biological experiments and theoretical studies have suggested that noise is a very important factor in determining the dynamics of biological systems [3–5]. In recent years, a number of experimental tools including single-cell qPCR, single-cell RNA sequencing, and multiplex single-cell proteomic methods have been used for lineage tracing of cellular phenotypes, understanding cellular functionality, and high-throughput drug screening [6–8]. A central theme in single-cell studies is the high heterogeneity at virtually all molecular levels beyond the genome [9].
The availability of large amount single-cell data has stimulated great interests in bioinformatics studies for analysing, understanding and visualizing single-cell data. A particular interesting research problem is the development of regulatory network models using single-cell observation data [10–12]. Mathematical methods for the analysis of single-cell observation data is mainly for normalization of experimental data, identification of variable genes, sub-population identification, differentiation detection and pseudo-temporal ordering [13]. These top-down approaches are mainly based on statistical analysis and machine learning techniques, and thus are able to deal with large-scale single-cell datasets [14]. For example, the algorithm Wanderlust represents each cell as a node and then ensemble cells into k-nearest neighbour graphs [15]. For each graph, it computes iteratively the shortest-path distance between cells. Another important type of approaches is the graphic model that represents the connection and/or regulations between genes and proteins. Different methods, such as the probabilistic graphic model, linear regression model, Bayesian network and Boolean network, have been applied to develop the regulatory networks [16–21]. One of the major challenges in computational biology is the development of dynamic models, such as differential equation models, to study the dynamic properties of genetic regulatory networks [17, 22, 23]. There are two major steps in designing a dynamic model, namely to determine the structure of network by specifying the connection and regulation between genes and proteins [19], and to quantify the strength of regulations [24]. In the last decade, a number of approaches have been applied to design dynamic models, including differential equation model, neural network model, petri-network model, and chemical reaction systems [25–29]. Recently we have proposed an integrated framework that combines both the probabilistic graphic model and differential equation model to infer the p53 gene networks that regulating the apoptosis process [30]. Regarding single-cell data, an additional step is to develop the pseudo-temporal trajectory by assuming each cell is uniformly distributed at different time points of the evolutionary process. A number of methods have been developed to analyse single-cell data [15, 31–35]. For example, The dimensionality reduction methods and cellular trajectory learning technique have been used to reverse-engineer the regulatory network by using differential equation based models [31]. In addition, the Single Cell Network Synthesis(SCNS) toolkit has been designed to develop regulatory network using single-cell experimental observation [36, 37]. Although these methods use either logistic models or dynamic models to infer genetic networks, a number of issues still remains, such as inference of network structure, development of appropriate dynamic models, and estimation of model parameters in the dynamic model. To address these issues, this work propose a novel approach to reverse-engineer gene networks using single-cell observations. To get pseudo-temporal ordering of single cells, we first use a method of dimensionality reduction, namely diffusion maps [36, 37], to get the lower dimensional structure of gene expression data, and then use the wanderlust algorithm [15] to order single cells according to their relative position in the cell cycle. 
Since there is substantial noise in the generated pseudo-trajectory data, the Gaussian process regression method is used to smooth the expression data [38]. Then the GENIE3 algorithm is employed to infer the structure of the gene regulatory network [39]. Using single-cell quantitative real-time reverse transcription-PCR analysis of 33 transcription factors and additional marker genes in 3934 cells with blood-forming potential, we develop a graphic model for the network of 40 genes and a dynamic model for a network of nine genes. A recent experimental study used the single-cell qPCR technique to identify the expression levels of 46 genes in 3934 single stem cells that were isolated from the mouse embryo [37]. Flk1 expression in combination with a Runx1-ires-GFP reporter mouse was used to measure cells with blood potential at distinct anatomical stages across a time course of mouse development. Single Flk1+ cells were flow sorted at primitive streak (PS), neural plate (NP) and head fold (HF) stages. In addition, the E8.25 cells were subdivided into putative blood and endothelial populations by isolating GFP+ cells (four somite, 4SG) and Flk1+GFP− cells (4SFG−), respectively. Cells were sorted from multiple embryos at each time point, with 3934 cells going to subsequent analysis. The experimental study quantified the expression of 33 transcription factors involved in endothelial and hematopoietic development, nine marker genes, including the embryonic globin Hbb-bH1 and cell surface markers such as Cdh5 and Itga2b (CD41), as well as four reference housekeeping genes (i.e. Eif2b1, Mrpl19, Polr2a and UBC). We select 40 genes from this dataset but exclude the four housekeeping genes and two other genes (i.e. HoxB2 and HoxD8) because the variations in the expression levels of these six genes are relatively small. The dCt values in Supplementary Table 3 [37] represent the relative gene expression levels. These values will be employed as experimental data to reverse-engineer the regulatory network of the remaining 40 genes in this work. Note that the majority of these dCt values are negative. It would be difficult to use a dynamic model to describe data with negative values. Therefore we conduct a shift computation by assuming that the minimal dCt value is zero in the "Mathematical modelling" subsection. Pseudo-temporal ordering The process of pseudo-temporal ordering can be divided into two major steps. The first step uses Diffusion Maps for lower-dimensional visualization of high-dimensional gene expression data, and then the Wanderlust algorithm is employed to order the individual cells to get the trends of different genes. Diffusion Map is a manifold learning technique for dimensionality reduction that re-organizes data according to their underlying geometry. It is a nonlinear approach for visual exploration that describes the relationship between individual points using a lower-dimensional structure that encapsulates the data [31]. An isotropic diffusion Gaussian kernel is defined as $$ W_{\varepsilon}(x_{i},x_{j})=\exp\left(-\frac{\|x_{i}-x_{j}\|^{2}}{2\varepsilon}\right), $$ where \(x_{i}=\left(x_{i}^{(1)},\ldots,x_{i}^{(D)}\right), i=1,\ldots,N\), is the expression data of gene i, D is the number of single cells, ∥·∥ is the Euclidean norm, and ε is an appropriately chosen kernel bandwidth parameter which determines the neighbourhood size of points.
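A toy NumPy sketch of this construction, together with the row normalisation and eigendecomposition described in the next paragraph, is given below; the bandwidth, the number of retained coordinates and the random input standing in for the dCt matrix are all assumptions made only for illustration:

import numpy as np

def diffusion_map(X, eps, d=3):
    # X: data matrix with one row per cell; returns d diffusion coordinates per cell.
    sq_dists = np.square(X[:, None, :] - X[None, :, :]).sum(-1)
    W = np.exp(-sq_dists / (2.0 * eps))          # isotropic Gaussian kernel
    M = W / W.sum(axis=1, keepdims=True)         # row-stochastic Markov matrix
    eigvals, eigvecs = np.linalg.eig(M)
    order = np.argsort(-eigvals.real)            # largest eigenvalues first
    # In practice the leading (constant) eigenvector is often discarded.
    return eigvecs[:, order[:d]].real

Y = diffusion_map(np.random.rand(200, 40), eps=5.0)
print(Y.shape)                                   # (200, 3)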
In addition, an N×N Markov matrix normalizing the rows of the kernel matrix is constructed as follows: $$ M_{ij}=\frac{W_{\varepsilon}\left(x_{i},x_{j}\right)} {P(x_{i})}, $$ where P(x i ) is an normalization constant given by \(P(x_{i})=\sum _{j} W_{\varepsilon }(x_{i},x_{j})\). M ij represents the connectivity between two data points x i and x j . It is also a measure of similarity between data points within a certain neighbourhood. Finally we compute eigenvalues and eigenvectors of this Markov matrix, and choose the largest d eigenvalues. The corresponding d eigenvectors are the output as the lower dimensional dataset Y i ,i=1,…d. Using the generated low dimensional dataset, we use the Wanderlust algorithm to get a one-dimensional developmental trajectory. There are several assumptions about the application Wanderlust to sort gene expression data from single cells. Firstly, the data sample includes cells of entire developmental process. In addition, the developmental trajectory is linear and non-branching, it means that the developmental processes is only one-dimensional. Furthermore, the changes of gene expression values is gradual during the developmental process, and thus the transitions between different stages are gradual. Based on these assumptions, we can infer the ordering of single cells and identify different stages in the cell development by using the Wanderlust algorithm. The Wanderlust algorithm also consists of two major stages, namely an initiation step and an iterative step for trajectory detection. In the first stage, we select a set of cells as landmarks uniformly at random. Each cell will have landmarks nearby. Then we construct a k-nearest-neighbours graph that every cell connect to k cells that are most similar to it, then we randomly pick l neighbours out of the k-nearest-neighbours for each cell and generate a l-out-of- k-nearest-neighbours graph (l-k-NNG). Then the second stage begins for the trajectory detection. One early cell point s should be selected first in this algorithm, which serves as the starting point of psuedo-trajectory. The point s can be determined by the Diffusion Maps method in the first. For every single cell, the initial trajectory score can be calculated as the distance from the starting-point cell s to that cell. For each cell t with early cell s and landmark cell l, if d(s,t)<d(s,l), then t precedes l, otherwise t follows l in the pseudo-trajectory. For each landmark cell, we define a weight as $$ w_{l,t}=\frac{d(l,t)^{2}}{\sum_{m} d(l,m)^{2}}, $$ and the trajectory score for t is defined as $$ {Score}_{t} = \sum_{l}\frac{d(l,t)}{n_{l}}w_{l,t}, $$ where n l is the number of landmark cells and the summation is over all landmarks l. The scores also include the beginning cell and landmarks. Then we use trajectory score as a new orientation trajectory and repeat the orientation step until landmark positions converge. Data smooth The pseudo-trajectory gene expression data determined by the Wanderlust algorithm have a large variations in the gene expression levels. Thus we use the Gaussian processes regression method for smoothing the noisy data. The Gaussian processes regression is a generative non-parametric approach for modelling probability distributions over functions. It begins with a prior distribution and updates this distribution when data points are observed, and finally produces the posterior distribution over functions [38]. Assume that we have the ordered data \({\mathcal {D}} =\{\mathbf {t,y}\}\). 
The observation y can be regarded as samples of random variables with the underlying distribution function f(t), which is described by a Gaussian noise model: $$\mathbf{y}= f(\mathbf{t})+\boldsymbol{\epsilon},\boldsymbol{\epsilon}\sim N\left(\mathbf{0},\sigma^{2} {I}\right). $$ We want to make prediction of the system state y ∗ at a point t ∗ based on the above model. The joint distribution of y and y ∗ is: $$\left[ \begin{array}{c} \mathbf{y}\\ y^{*} \end{array}\right] \sim N\left(0, \left[\begin{array}{cc} K & K_{*}^{T}\\ K_{*} & K_{**} \end{array}\right] \right) $$ Here K is a kernel trick to connect two observations. One popular choice for the kernel is the squared exponential covariance function, defined by $$K\left(t,t^{\prime}\right)=\sigma^{2} \exp\left[\frac{-\left(t-t^{\prime}\right)^{2}}{2l^{2}}\right] $$ where σ 2 is a signal variance parameter and l is a length scale parameter. If t≈t ′, it means that f(t) is highly correlated with f(t ′). However, if t is distant from t ′, K(t,t ′)≈0, the two points is largely uncorrelated with each other. Finally the posterior distribution is given by y ∗|y∼N(μ,Σ) with: $$ \begin{aligned} \boldsymbol{\mu} &= K_{*}\left[K+\sigma^{2} I\right]^{-1}y\\ \boldsymbol{\Sigma} &= K_{**}-K_{*}\left[K+\sigma^{2} I\right]^{-1}K_{*}^{T} \end{aligned} $$ Networks construction In this work the GENIE3 algorithm will be employed for reconstruction of regulatory network using the determined pseudo-temporal trajectory based on single-cell data in [40]. Instead of considering the regulation in a whole network, this method studies the inference accuracy of N genes separately by using the regression model. In this regression model, the expression level of a particular gene is described by the regulatory function that is determined by the expression of the other N−1 genes, given by $$ x_{t+1}^{(j)}=f_{j}\left(\boldsymbol{x_{t}}^{(-j)}\right)+\epsilon, $$ where x (−j)=(x (1),…,x (j−1),x (j+1),…,x (N))T, ε is a random noise with zero mean, and function f j is designed to search for genes that regulate the expression of gene x (j) using a random forest method. The function only exploits the expression in x −j of the genes that are direct regulators of gene j, i.e. genes that are directly connected to gene j in the targeted network. For each gene, a learning sample is generated with the expression levels of that gene as the output by using and expression levels of all other genes as input. A function is learned from the data and a local ranking of all genes except j is computed. The N local rankings are then aggregated to get a global ranking of all regulatory links. The nature of function f j is unknown but they are expected to involve the expression of several genes (combinatorial regulation) and to be non-linear. Each subproblem, defined by a learning sample, is a supervised (non-parametric) regression problem. Using square error loss, each problem amounts at finding a function that minimizes the following error: $$ \sum_{k=1}^{N} \left(x_{k}^{j}-f_{j}\left(x_{k}^{-j}\right)\right)^{2}. $$ Note that, depending of the interpretation of the weight, the aggregation to get a global ranking of regulatory links is not trivial. It requires to normalize each expression vector appropriately in the context of this tree-based method. We have designed a modelling framework by using differential equations to study the genetic regulations based on microarray gene expression data [30]. 
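Before turning to the dynamic model, the per-gene ranking step of GENIE3 described above can be sketched as follows; scikit-learn's RandomForestRegressor stands in for the tree ensembles used by GENIE3, and the tree count and toy input are assumptions made only for illustration:

import numpy as np
from sklearn.ensemble import RandomForestRegressor

def rank_regulators(expr, n_trees=100):
    # expr: (n_samples, n_genes) expression matrix, e.g. the smoothed pseudo-trajectory.
    # Returns w with w[i, j] scoring the putative regulatory link from gene i to gene j.
    n_genes = expr.shape[1]
    w = np.zeros((n_genes, n_genes))
    for j in range(n_genes):
        inputs = np.delete(expr, j, axis=1)      # expression of all other genes
        rf = RandomForestRegressor(n_estimators=n_trees, random_state=0)
        rf.fit(inputs, expr[:, j])
        others = [i for i in range(n_genes) if i != j]
        w[others, j] = rf.feature_importances_   # importance of gene i for target j
    return w

weights = rank_regulators(np.random.rand(300, 8))
print(weights.shape)                             # (8, 8)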
Network construction
In this work, the GENIE3 algorithm is employed to reconstruct the regulatory network using the pseudo-temporal trajectory determined from the single-cell data in [40]. Instead of considering the regulation of the whole network at once, this method treats the inference for each of the N genes separately using a regression model. In this regression model, the expression level of a particular gene is described by a regulatory function of the expression of the other N−1 genes, given by
$$ x_{t+1}^{(j)}=f_{j}\left(\boldsymbol{x}_{t}^{(-j)}\right)+\epsilon, $$
where x^{(−j)}=(x^{(1)},…,x^{(j−1)},x^{(j+1)},…,x^{(N)})^{T}, ε is a random noise with zero mean, and the function f_j is designed to search for genes that regulate the expression of gene x^{(j)} using a random forest method. The function only exploits the expression, within x^{(−j)}, of the genes that are direct regulators of gene j, i.e. genes that are directly connected to gene j in the targeted network. For each gene, a learning sample is generated with the expression levels of that gene as output and the expression levels of all other genes as input. A function is learned from the data and a local ranking of all genes except j is computed. The N local rankings are then aggregated to obtain a global ranking of all regulatory links. The nature of the functions f_j is unknown, but they are expected to involve the expression of several genes (combinatorial regulation) and to be non-linear. Each subproblem, defined by a learning sample, is a supervised (non-parametric) regression problem. Using a square error loss, each problem amounts to finding a function that minimizes the following error:
$$ \sum_{k=1}^{N} \left(x_{k}^{(j)}-f_{j}\left(x_{k}^{(-j)}\right)\right)^{2}. $$
Note that, depending on the interpretation of the weights, the aggregation into a global ranking of regulatory links is not trivial: it requires normalizing each expression vector appropriately in the context of this tree-based method.

We have previously designed a modelling framework based on differential equations to study genetic regulation using microarray gene expression data [30]. This general approach is extended here to develop dynamic models using single-cell expression data. Using the notation introduced in the previous subsection, the expression level of the i-th gene at time t is denoted x_i(t). The evolution of the gene expression levels is described by the following dynamic model of differential equations:
$$ \frac{d x_{i}}{dt}=c_{i}+k_{i}f_{i}\left(x_{1},\ldots,x_{N}\right)-d_{i} x_{i}, \qquad i=1,\ldots,N, $$
where c_i and k_i are the basal and maximal synthesis rates of gene i, respectively, and d_i is the decay rate of the transcript. The key point is the selection of the regulatory function, which should accommodate both positive and negative regulation. Based on the Shea-Ackers formula [41], this work uses the following function:
$$ f_{i}=\frac{a_{i1}x_{1}(t)+\ldots+a_{iN}x_{N}(t)}{1+b_{i1}x_{1}(t)+\ldots+b_{iN}x_{N}(t)}. $$
The coefficient a_ij represents the regulation of the expression of gene i by gene j. This regulation may be positive (a_ij>0) or negative (a_ij=0 while the corresponding coefficient b_ij>0). For example, if a_ij>0, a larger expression level of gene j will promote the expression of gene i. However, if a_ij=0 and b_ij>0, a higher expression level of gene j will only increase the denominator of the regulatory function and thus decrease its value. This model assumes that, if b_ij=0, then the coefficient a_ij must also be zero; and there is no regulatory relationship from gene j to gene i if a_ij=b_ij=0. Since there is no experimental time scale, it is assumed that each cell in the pseudo-trajectory corresponds to one unit of time. Thus the time period of the model is [0, 3933], since there are 3934 single cells.

Note that the proposed model (7) is based on the assumption that the expression levels x_i(t)≥0. However, the majority of the experimental data are negative. To address this issue, we apply a linear transformation that shifts the negative dCt values to non-negative values, denoted dCt_1, so that for each gene the minimal shifted value is zero:
$$ dCt_{1}(\text{gene } i)= dCt_{0}(\text{gene } i) +\left|\min\left(dCt_{0}(\text{gene } i)\right)\right|. $$
It is clear that the transformed values dCt_1 are always non-negative. We use the Approximate Bayesian Computation (ABC) rejection sampling algorithm [42, 43] to search for the optimal model parameters. The uniform distribution is used as the prior distribution of the unknown parameters. Since the values of dCt_1 may be quite large, we use the relative absolute error to measure the difference between simulations and experimental data, given by
$$ E=\sum_{i=1}^{N}\sum_{j=1}^{M} \frac{|x_{ij}-x_{ij}^{*}|}{\max_{j}{\left\{x_{ij}^{*}\right\}}}, $$
where x*_ij and x_ij are the simulated and experimentally measured gene expression levels at time point t_j for gene i, respectively.
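The sketch below illustrates the right-hand side of model (7) with the Shea-Ackers regulatory function, the dCt shift, and the relative absolute error (9); the parameter values and the example arrays are placeholders, not fitted values from this study.

```python
# Minimal sketch of model (7) and error (9); all numeric values are placeholders.
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, x, c, k, d, A, B):
    # dx_i/dt = c_i + k_i * f_i(x) - d_i * x_i,
    # with Shea-Ackers function f_i = (sum_j a_ij x_j) / (1 + sum_j b_ij x_j)
    f = (A @ x) / (1.0 + B @ x)
    return c + k * f - d * x

def relative_absolute_error(x, x_star):
    # E = sum_i sum_j |x_ij - x*_ij| / max_j{x*_ij}, following the notation of Eq. (9)
    return np.sum(np.abs(x - x_star) / x_star.max(axis=1, keepdims=True))

# dCt shift per gene so that expression levels are non-negative:
# dCt1 = dCt0 + np.abs(dCt0.min(axis=1, keepdims=True))

# Example with a hypothetical 9-gene module and random placeholder parameters:
# N, rng = 9, np.random.default_rng(0)
# c, k, d = np.full(N, 0.1), np.ones(N), np.full(N, 0.05)
# A, B = rng.random((N, N)), rng.random((N, N))
# sol = solve_ivp(rhs, (0.0, 100.0), np.ones(N), args=(c, k, d, A, B),
#                 t_eval=np.arange(101.0), method="LSODA")
```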
Due to the large number of single cells, which leads to a long time period for the numerical simulation of model (7), the proposed model is stiff and simulation frequently breaks down unless the search space is very small. Thus, instead of simulating the whole time interval at once, we consider the numerical solution over blocks of k pseudo-time points and calculate the transfer probability
$$ L(\theta)=f_{0}\left[(t_{0}, x_{0})|\theta\right]\prod_{i=1}^{N/k} f\left[\left(t_{ki},x_{ki}\right)|\left(t_{k(i-1)+1},x_{k(i-1)+1}\right); \theta\right]. $$
In this work we choose k=100 and assume that f_0[(t_0,x_0)|θ]=1, and the transitional probability is calculated using the absolute error kernel, given by
$$f\left[\left(t_{ki},x_{ki}\right)|\left(t_{k(i-1)+1},x_{k(i-1)+1}\right); \theta\right]=\frac{1}{E_{i}}, $$
where E_i is the simulation error (9) obtained using (t_{k(i−1)+1}, x_{k(i−1)+1}) as the initial condition. Since different sets of estimated model parameters may generate simulations with similar simulation errors, we use the robustness property of the mathematical model as an additional criterion for selecting the estimated model parameters [44, 45]. The detailed computing process of the robustness analysis can be found in [30]. For each module of gene regulation, we use the ABC algorithm to generate a number of sets of model parameters and then select the top 5 sets with the minimal estimation error for robustness analysis. In this way, we are able to exclude the influence of simulation error on the robustness property of the model.
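A sketch of this block-wise scoring is shown below; `simulate_block` is a hypothetical helper that integrates model (7) from a given initial condition, and the choice k = 100 follows the text.

```python
# Minimal sketch of the block-wise likelihood L(theta); simulate_block is hypothetical.
import numpy as np

def block_log_likelihood(simulate_block, x_obs, k=100):
    """x_obs: genes x pseudo-time matrix; simulate_block(x0, n) returns a
    simulated genes x n block starting from initial condition x0."""
    n_points = x_obs.shape[1]
    log_like = 0.0                                    # f_0[(t_0, x_0)|theta] = 1
    for start in range(0, n_points - 1, k):
        stop = min(start + k, n_points - 1)
        block_obs = x_obs[:, start:stop + 1]
        block_sim = simulate_block(x_obs[:, start], block_obs.shape[1])
        # E_i: relative absolute error of the block, as in Eq. (9)
        E_i = np.sum(np.abs(block_sim - block_obs)
                     / block_obs.max(axis=1, keepdims=True))
        log_like += -np.log(E_i)                      # transition "probability" 1 / E_i
    return log_like
```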
Diffusion map visualization
Using the single-cell data, the Diffusion Map algorithm is first employed to visualize the dataset [31]. The purpose of this step is to reduce the dimension of the dataset and reveal the pattern of the data in three-dimensional space. Generally, we choose the three eigenvectors of the kernel matrix (2) that have the largest eigenvalues for visualization. Figure 1 gives the diffusion coordinates for the first, second and third largest eigenvalues. It shows that all the data points in the developmental trajectory form only one branch. This result indicates that the single-cell dataset is appropriate for generating a pseudo-temporal trajectory with the Wanderlust algorithm. This analysis also provides the single cell that will be used as the first cell in the development of the pseudo-temporal trajectory.
Fig. 1. Diffusion map of 3934 single cells using the qPCR data

Wanderlust ordering
After determining the starting cell s in the previous section, the Wanderlust algorithm is employed to obtain the pseudo-temporal trajectory for the expression dynamics of the genes. In this program, the widely used Euclidean measure is used to calculate the distance between different cells. Figure 2 shows the determined pseudo-temporal trajectory of four genes based on the raw dCt data in [31]. Based on the generated pseudo-temporal trajectory, we find that the expression levels of the four housekeeping genes barely change. Similar observations apply to the expression levels of genes HoxB2 and HoxD8. Therefore we do not consider these six genes in the network development.
Fig. 2. Pseudo-expression trajectory of four genes using raw qPCR data and smoothed data (solid bold line)

The pseudo-temporal trajectory has large variations in the expression levels of every gene, which makes it very difficult to fit such data with a differential equation model. To address this issue, the Gaussian process regression method is used to remove the variations in the data and produce a smoother trajectory. Figure 2 also shows the pseudo-temporal trajectory of the four genes after smoothing. Compared with the raw dCt data with large variations, the smoothed data for the same gene have much smaller fluctuations in the expression levels.

One characteristic of the expression levels is that all genes (excluding the six genes above) have both high and low expression levels; genetic switching exists in the expression levels. Based on the time intervals with high or low expression levels, the processed data can be classified into a number of patterns. For example, gene Cdh1 has high expression levels in the pseudo-time interval [0, 500] and low expression levels in the following interval. In contrast, genes Gfi1b and Tal1 have low expression levels in the pseudo-time intervals [0, 2500] and [0, 800], respectively, but their expression switches to high levels in the following intervals. A few genes, such as Kdr in Fig. 2, have two genetic switchings in the pseudo-temporal trajectory. Since our proposed technique has previously been used to realize similar genetic-switching behaviour [45], this modelling approach is used in this work to realize the pseudo-temporal trajectories.

With the pseudo-temporal trajectory based on single-cell data available, we then reverse-engineer the structure of the regulatory network of 40 genes. The GENIE3 algorithm is used to develop a graphic model of these 40 genes. For each gene, a regression model is used to infer the regulation of the other 39 genes on the expression of that gene. We thus obtain 39 weight values for the possible regulation strengths of each gene, and the total number of genetic regulation strengths is 1560 (=39×40). In order to maintain a certain number of regulations for each gene, we select the top 10 weight values per gene, so that the number of selected regulation values is only 400. Since a network with 400 undirected edges is still very dense, we set a threshold value to select the strongest regulations: an edge is selected if its weight is larger than the threshold. We tested different threshold values by decreasing the threshold gradually; the optimal threshold is reached when the number of remaining edges is relatively small but all genes remain connected to the network. The constructed network is given in Fig. 3. This network contains 100 regulatory connections between the 40 genes. Among these 40 genes, the maximal number of connecting edges for a gene is 11, while the minimal number is 1. There are on average 2.5 edges per gene and, since each edge is shared by two genes, each gene connects on average to five other genes.
Fig. 3. Regulatory structure of the network with 40 genes generated by the GENIE3 algorithm
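The network-construction step just described can be sketched as follows using scikit-learn random forests. The number of trees, the top-10 cut per gene and the final threshold mirror the procedure above, but the function names and defaults are illustrative rather than the exact GENIE3 implementation.

```python
# Minimal GENIE3-style sketch using random forests (illustrative, not the reference implementation).
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def rank_edges(expr, gene_names, top_k=10, n_trees=100):
    # expr: cells x genes matrix ordered along the pseudo-trajectory
    n_genes = expr.shape[1]
    edges = []
    for j in range(n_genes):
        X = np.delete(expr, j, axis=1)                   # other genes as inputs
        y = expr[:, j]                                   # target gene j as output
        rf = RandomForestRegressor(n_estimators=n_trees, random_state=0).fit(X, y)
        regulators = [g for i, g in enumerate(gene_names) if i != j]
        ranked = sorted(zip(regulators, rf.feature_importances_),
                        key=lambda p: -p[1])[:top_k]     # keep the top 10 per gene
        edges += [(reg, gene_names[j], w) for reg, w in ranked]
    return sorted(edges, key=lambda e: -e[2])            # global ranking of links

# edges = rank_edges(smoothed_expr, names)
# network = [e for e in edges if e[2] > threshold]  # lower threshold until all genes stay connected
```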
We have developed a mathematical model to describe the dynamics of the network with 40 genes. However, numerical results suggest that it is difficult to use a differential equation model to simulate the expression dynamics of all 40 genes: the model includes a large number of parameters that must be estimated and, due to the complexity of the search space, the simulation error is large. In addition, the genetic switching in the observation data makes the designed ODE model stiff. Therefore we consider a smaller network with fewer genes. We compared the designed graphic model in Fig. 3 with the model in Figure 3C of [37] and selected nine genes, namely GATA1, Gfi1b, Hhex, Ikaros, Myb, Nfe2, Notch1, Sox7 and Sox17. The regulatory relationships between these nine genes are consistent in the two networks. However, the regulation between these nine genes in our graphic model in Fig. 3 does not form a fully connected network; thus the regulation between the gene pair (Sox7, Sox17), which exists in the network in [37], was added to Fig. 4 in order to form a complete network. The developed network model is presented in Fig. 4.
Fig. 4. Graphic model for the regulatory network with nine genes. All the regulations (except that for the gene pair Sox7 and Sox17) are derived from the network in Fig. 3

The nine genes selected in Fig. 4 are divided into two groups based on their expression patterns. Five genes, namely GATA1, Gfi1b, Ikaros, Myb and Nfe2, are activated at the pseudo-time point t≈2300, and their expression is promoted to a high level at different speeds for different genes. In contrast, the expression of the remaining four genes, namely Hhex, Notch1, Sox7 and Sox17, is first inhibited and their expression levels go down to a low level at the pseudo-time point t≈700; their expression is then activated at t≈2500 and returns to high levels again. The observed gene expression changes are consistent with other experimental observations showing that genes GATA1, Gfi1b and Ikaros do have substantial changes of expression levels over time [45]. The genetic switching in the expression levels of these genes is important for maintaining the functional activities of blood stem cells. Following the modelling method proposed in [45], it is assumed that some key model parameters are functions of time rather than constants. For the five genes in the first group, we use the following synthesis rate for gene expression:
$$k_{i}=\left\{ \begin{array}{ll} k_{i0}, & t<2500\\ k_{i0}A_{i}, & \text{else} \end{array}\right. $$
where k_{i0} is the basal synthesis rate of gene i and A_i>1. For the four genes in the second group, it is assumed that both the synthesis rate and the degradation rate are functions of time, since we need to realize the first switch from a high expression level to a low expression level:
$$k_{i}=\left\{ \begin{array}{ll} k_{i0}A_{i}, & 500<t<1000\\ k_{i0}, & \text{else} \end{array}\right. \qquad d_{i}=\left\{ \begin{array}{ll} d_{i0}A_{i}, & 2500<t<3000\\ d_{i0}, & \text{else} \end{array}\right. $$
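The piecewise rates above translate directly into simple functions of pseudo-time; a sketch is given below, where the switch times follow the definitions in the text and k_i0, d_i0 and A_i are placeholder arguments to be estimated.

```python
# Minimal sketch of the time-dependent rates; switch times follow the text, values are placeholders.
def k_first_group(t, k0, A):
    # GATA1, Gfi1b, Ikaros, Myb, Nfe2: k_i = k_i0 for t < 2500, k_i0 * A_i otherwise
    return k0 if t < 2500 else k0 * A

def k_second_group(t, k0, A):
    # Hhex, Notch1, Sox7, Sox17: k_i = k_i0 * A_i for 500 < t < 1000, k_i0 otherwise
    return k0 * A if 500 < t < 1000 else k0

def d_second_group(t, d0, A):
    # Hhex, Notch1, Sox7, Sox17: d_i = d_i0 * A_i for 2500 < t < 3000, d_i0 otherwise
    return d0 * A if 2500 < t < 3000 else d0
```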
Our simulation results suggest that the proposed ODE system is stiff even when an implicit method with very good stability properties is used for its numerical solution. Numerical simulation breaks down if we try to find the solution over a relatively long pseudo-time interval. Therefore we separate the whole time interval into a number of subintervals and, in each subinterval, use the experimental observation data as the initial condition to generate the solution for that subinterval. Figure 5 shows simulation results using a fixed subinterval of 100 time units. We have also examined other subinterval lengths, namely t=50 and t=200; the numerical results are consistent with those shown in Fig. 5. In addition, the ABC algorithm is used to infer the unknown model parameters using the data shown in Fig. 2. Figure 5 suggests that the proposed model is able to match the experimental data very well. Certainly, the simulation error depends on the length of the subinterval: the error is larger if the subinterval is longer.
Fig. 5. Expression levels of four genes based on the network in Fig. 4 (solid line: the experimental data obtained by Gaussian process regression; dashed line: expression levels predicted by the mathematical model)

Inference of a network with more regulations
The proposed network in Fig. 4 includes nine one-way or mutual regulations and is fully consistent with the network predicted in [37]. To make predictions about further potential regulations among these nine genes, we extend the network by including more regulations. We apply the GENIE3 algorithm to the raw dCt data of these nine genes only. According to the calculated weights of the candidate edges, we select the 27 one-way regulations with the highest weights. Since a few of these 27 regulation edges are two-way regulations, the generated network includes 17 undirected regulations (see Additional file 1). Compared with the network in Fig. 4, the number of potential regulation edges has therefore been doubled. The structure of the extended network is shown in Additional file 1: Figure S1 in the Supplementary Information. Note that the regulations in Additional file 1: Figure S1 do not include all the regulations in Fig. 4: the added regulation (Sox7, Sox17) in Fig. 4 does not appear in Additional file 1: Figure S1, whereas the other eight regulations in Fig. 4 do. For this extended network, we also use the ABC algorithm to infer the model parameters using the modified dCt values. Simulation results for four genes are presented in Fig. 6. Numerical results for the total simulation error suggest that the extended network (Fig. 7b, index 1) has better accuracy than the network in Fig. 4 (Fig. 7a, index 1). Note that the model based on a network with more regulations has more model parameters, which gives more flexibility to match the experimental data; it is therefore reasonable that the model based on the network in Additional file 1: Figure S1 has better accuracy than that based on Fig. 4. Nevertheless, this simulation result suggests that more regulations may exist than those shown in Fig. 4.
Fig. 6. Expression levels of four genes based on the extended network in Additional file 1: Figure S1 (solid line: the experimental data obtained by Gaussian process regression; dashed line: expression levels predicted by the mathematical model)
Fig. 7. Accuracy of the inferred network models without and with positive auto-regulations. a The accuracy of the inferred network models based on the network in Fig. 4. b The accuracy of the inferred network models based on the network in Additional file 1: Figure S1. (Model index: 1, network without auto-regulation; 2–10, network with positive auto-regulation for genes Gata1, Gfi1b, Hhex, Ikaros, Myb, Nfe2, Notch1, Sox17, Sox7, respectively; 11, all nine genes have positive auto-regulation)

Inference of a network with auto-regulation
In the graphic model generated by the GENIE3 algorithm, auto-regulation, namely the positive or negative regulation of a gene on its own expression, is not considered. To find potential auto-regulations among these genes, we test the network after adding a positive auto-regulation to a particular gene: for the i-th gene, we set b_ii>0 and a_ii>0 for positive auto-regulation.
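In the model, this test amounts to opening up the diagonal entries of the regulation matrices; a minimal sketch is shown below, where the numeric values of a_ii and b_ii are placeholders to be estimated by the ABC search.

```python
# Minimal sketch: add positive auto-regulation for gene i (placeholder values).
import numpy as np

def add_positive_autoregulation(A, B, i, a_ii=0.5, b_ii=0.5):
    A, B = A.copy(), B.copy()
    A[i, i] = a_ii   # a_ii > 0: gene i promotes its own expression
    B[i, i] = b_ii   # b_ii > 0 accompanies any non-zero a_ii in this model
    return A, B
```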
We first test the gene network shown in Fig. 4 with positive auto-regulation for only one gene. Numerical results in Fig. 7a suggest that no model with positive auto-regulation (model index 2–10) has a smaller simulation error than the network without any auto-regulation (index 1 in Fig. 7a). We have also tested the network in which every gene has positive auto-regulation. The results (index 11 in Fig. 7a) show that the simulation error of this model is larger than that of any other model in Fig. 7a. Thus it is unlikely that all these genes have positive auto-regulation. We have also conducted the positive auto-regulation test for the model based on the network in Additional file 1: Figure S1. An interesting result is that, compared with the network without auto-regulation (index 1 in Fig. 7b), the model with positive auto-regulation added to any one gene (model index 2–10) has better accuracy. Note that, compared with the network model without auto-regulation, the model with positive auto-regulation for one gene has only two additional model parameters; such a small change in the number of unknown parameters would not bring much flexibility to match the experimental data. This result therefore suggests that there may be positive auto-regulation for some genes in this network. However, when we add auto-regulation to all genes (index 11 in Fig. 7b), the simulation error of this model is, similar to the result in Fig. 7a (index 11), larger than that of any other model. This gives further evidence that it is unlikely that all these genes have positive auto-regulation.

Robustness analysis
We also test the robustness property of the developed models and of the models with positive auto-regulations. We first use the estimated optimal parameter set to generate one simulation, which is regarded as the exact simulation of the model without any perturbation. Then all the model parameters are perturbed using a uniformly distributed random variable, and perturbed simulations are obtained using the perturbed model parameters. We generate 1000 sets of perturbed simulations and calculate the mean and variance of the simulation error of the perturbed simulations relative to the unperturbed simulation. For the gene network shown in Fig. 4, Fig. 8a suggests that the models with auto-regulation for genes Gata1, Gfi1b and Hhex have better robustness properties than the model without auto-regulation. For the gene network with 17 regulations in Additional file 1: Figure S1, on the other hand, the networks with auto-regulation for genes Sox7 and Sox17 have better robustness properties than the network without positive auto-regulation. These simulation results do not provide strong evidence to support positive auto-regulation for the nine genes in Fig. 4.
Fig. 8. Robustness property of various models for the gene network with nine genes. a Robustness for the gene network in Fig. 4. b Robustness for the gene network with 17 regulations. (Model index: 1, network without any auto-regulation; 2–10, positive auto-regulation for genes Gata1, Gfi1b, Hhex, Ikaros, Myb, Nfe2, Notch1, Sox17, Sox7, respectively; 11, all nine genes have positive auto-regulation.) For each model, the first bar is the mean of the error and the second bar is the variance of the error
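The robustness test can be sketched as below; `simulate` is a hypothetical helper returning the model trajectory for a parameter vector, and the ±10% perturbation range is an assumption, since the text only states that the perturbation is uniformly distributed.

```python
# Minimal robustness-analysis sketch; simulate(theta) is a hypothetical helper.
import numpy as np

def robustness(simulate, theta_opt, n_trials=1000, rel_range=0.1, seed=0):
    rng = np.random.default_rng(seed)
    x_ref = simulate(theta_opt)                     # unperturbed simulation
    errors = []
    for _ in range(n_trials):
        factors = rng.uniform(1 - rel_range, 1 + rel_range, size=theta_opt.shape)
        x_pert = simulate(theta_opt * factors)      # perturbed simulation
        errors.append(np.sum(np.abs(x_pert - x_ref)
                             / x_ref.max(axis=1, keepdims=True)))
    return np.mean(errors), np.var(errors)          # mean and variance of the error
```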
Conclusions
In this work we have designed an integrated approach to reverse-engineer gene networks regulating early blood development based on single-cell experimental observations. The diffusion map method is first used to obtain a visualization of the gene expression data derived from 3934 blood stem cells. The Wanderlust algorithm is then employed to develop the pseudo-trajectory for the activities of a number of genes. Since the gene expression levels in the developed pseudo-trajectory show large fluctuations, we use the Gaussian process regression method to smooth the gene expression data and obtain a pseudo-trajectory with much smaller fluctuations. The proposed integrated framework consists of both the GENIE3 algorithm to reconstruct the regulatory network and a mathematical model using differential equations to describe the dynamics of gene expression. The developed approach is applied to study the network regulating early blood cell development, and we designed a graphic model for a regulatory network with forty genes and a differential equation model for a network of nine genes. The results of this work show that the developed approach is an efficient and effective method for reverse-engineering gene networks from single-cell experimental observations.

In this work we use the simulation error as the key criterion to select the model parameters and infer the regulation between genes. However, because of the complex search space of model parameters and the noise in experimental data, it may be difficult to judge which model is really better than the others if the difference between simulation errors is small. In fact, the simulation errors of the various models for the network of nine genes are quite close to each other. Therefore, in addition to using the simulation error as the sole criterion to select a model, other measures, such as the AIC value, parameter identifiability and the robustness property of a network, are also needed as important criteria. All of these issues are potential topics for future research.

References
Taniguchi Y, Choi PJ, Li GW, Chen H, Babu M, Hearn J, Emili A, Xie XS. Quantifying E. coli proteome and transcriptome with single-molecule sensitivity in single cells. Science. 2010; 329:533–8.
Dumitriu A, Golji J, Labadorf AT, Gao B, Beach TG, Myers RH, Longo KA, Latourelle JC. Integrative analyses of proteomics and RNA transcriptomics implicate mitochondrial processes, protein folding pathways and GWAS loci in Parkinson disease. BMC Med Genomics. 2016; 9:5.
Munsky B, Neuert G, van Oudenaarden A. Using gene expression noise to understand gene regulation. Science. 2012; 336:183–7.
Espinosa Angarica V, Del Sol A. Modeling heterogeneity in the pluripotent state: A promising strategy for improving the efficiency and fidelity of stem cell differentiation. Bioessays. 2016; 38:758–68.
Waldron D. Gene expression: Environmental noise control. Nat Rev Genet. 2015; 16:624–5.
Junker JP, van Oudenaarden A. Every cell is special: genome-wide studies add a new dimension to single-cell biology. Cell. 2014; 157:8–11.
Deng Q, Ramskold D, Reinius B, Sandberg R. Single-cell RNA-seq reveals dynamic, random monoallelic gene expression in mammalian cells. Science. 2014; 343:193–6.
Wei W, Shin YS, Ma C, Wang J, Elitas M, Fan R, Heath JR. Microchip platforms for multiplex single-cell functional proteomics with applications to immunology and cancer research. Genome Med. 2013; 5:75.
Buganim Y, Faddah D, Cheng A, Itskovich E, Markoulaki S, Ganz K, Klemm S, Vanoudenaarden A, Jaenisch R. Single-cell expression analyses during cellular reprogramming reveal an early stochastic and a late hierarchic phase. Cell. 2012; 150(6):1209–22.
Stegle O, Teichmann SA, Marioni JC. Computational and analytical challenges in single-cell transcriptomics. Nat Rev Genet. 2015; 16:133–45.
Poirion OB, Zhu X, Ching T, Garmire L. Single-cell transcriptomics bioinformatics and computational challenges. Front Genet. 2016; 7:163.
Woodhouse S, Moignard V, Göttgens B, Fisher J. Processing, visualising and reconstructing network models from single-cell data. Immunol Cell Biol. 2015; 94:256–65. Marr C, Zhou JX, Huang S. Single-cell gene expression profiling and cell state dynamics: collecting data, correlating data points and connecting the dots. Curr Opin Biotechnol. 2016; 39:207–14. Moignard V, Macaulay IC, Swiers G, Buettner F, Schütte J, Calero-Nieto FJ, et al. Characterization of transcriptional networks in blood stem and progenitor cells using high-throughput single-cell gene expression analysis. Nat Cell Biol. 2013; 15:363–72. Bendall SC, Davis KL, Amir el-AD, Tadmor MD, Simonds EF, Chen TJ, Shenfeld DK, Nolan GP, Pe'er D. Single-cell trajectory detection uncovers progression and regulatory coordination in human b cell development. Cell. 2014; 157(3):714–25. Penfold CA, Wild DL. How to infer gene networks from expression profiles, revisited. Interface Focus. 2011; 1:857–70. Bar-Joseph Z, Gitter A, Simon I. Studying and modelling dynamic biological processes using time-series gene expression data. Nat Rev Genet. 2012; 13:552–64. Maetschke SR, Madhamshettiwar PB, Davis MJ, Ragan MA. Supervised, semi-supervised and unsupervised inference of gene regulatory networks. Brief Bioinform. 2012; 15:195–211. Wang J, Cheung LW, Delabie J. New probabilistic graphical models for genetic regulatory networks studies. J Biomed Inform. 2005; 38:443–55. Sachs K, Perez O, Pe'Er D, Lauffenburger DA, Nolan GP. Causal protein-signaling networks derived from multiparameter single-cell data. Science. 2005; 308:523–9. Brown MPS, Grundy WN, Lin D, Cristianini N, Sugnet CW, Furey TS, Ares M, Haussler D. Knowledge-based analysis of microarray gene expression data by using support vector machines. Proc Natl Acad Sci. 2000; 97:262–7. Kim Y, Han S, Choi S, Hwang D. Inference of dynamic networks using time-course data. Brief Bioinformatics. 2014; 15:212–28. Chasman D, Siahpirani AF, Roy S. Network-based approaches for analysis of complex biological systems. Curr Opin Biotechnol. 2016; 39:157–66. Wang J, Tian T. Quantitative model for inferring dynamic regulation of the tumour suppressor gene p53. BMC Bioinformatics. 2010; 11(1):36. Ocone A, Sanguinetti G. Reconstructing transcription factor activities in hierarchical transcription network motifs. Bioinformatics. 2011; 27:2873–9. Maraziotis IA, Dragomir A, Thanos D. Gene regulatory networks modelling using a dynamic evolutionary hybrid. BMC Bioinformatics. 2010; 11(1):1. Petralia F, Wang P, Yang J, Tu Z. Integrative random forest for gene regulatory network inference. Bioinformatics. 2015; 31(12):i197–i205. Marbach D, Costello JC, Küffner R, Vega NM, Prill RJ, Camacho DM, Allison KR, Kellis M, Collins JJ, Stolovitzky G. Wisdom of crowds for robust gene network inference. Nat Methods. 2012; 9:796–804. Kim D, Kang M, Biswas A, Liu C, Gao J. Integrative approach for inference of gene regulatory networks using lasso-based random featuring and application to psychiatric disorders. BMC Med Genomics. 2016; 9(Suppl 2):50. Wang J, Wu Q, Hu X, Tian T. An integrated approach to infer dynamic protein-gene interactions – a case study of the human p53 protein. Methods. 2016; 110:3–13. Ocone A, Haghverdi L, Mueller NS, Theis FJ. Reconstructing gene regulatory dynamics from high-dimensional single-cell snapshot data. Bioinformatics. 2015; 31:i89–i96. Krishnaswamy S, Spitzer MH, Mingueneau M, Bendall SC, Litvin O, Stone E, Pe'er D, Nolan GP. Systems biology. 
Conditional density-based analysis of T cell signaling in single-cell data. Science. 2014; 346:1250689. Ji Z, Ji H. Tscan: Pseudo-time reconstruction and evaluation in single-cell RNA-seq analysis. Nucleic Acids Res. 2016; 44:e177. Pina C, Teles J, Fugazza C, May G, Wang D, Guo Y, Soneji S, Brown J, Edén P, Ohlsson M. Single-cell network analysis identifies ddit3 as a nodal lineage regulator in hematopoiesis. Cell Reports. 2015; 24:1503–10. Trapnell C, Cacchiarelli D, Grimsby J, Pokharel P, Li S, Morse M, Lennon NJ, Livak KJ, Mikk elsen TS, Rinn JL. Pseudo-temporal ordering of individual cells reveals dynamics and regulators of cell fate decisions. Nat Biotechnol. 2014; 32:381. Haghverdi L, Buettner F, Theis FJ. Diffusion maps for high-dimensional single-cell analysis of differentiation data. Bioinformatics. 2015; 31(18):2989–98. Moignard V, Woodhouse S, Haghverdi L, Lilly AJ, Tanaka Y, Wilkinson AC, Buettner F, Macaulay IC, Jawaid W, Diamanti E, et al.Decoding the regulatory network of early blood development from single-cell gene expression measurements. Nat biotechnol. 2015; 33:269–76. Williams CKI, Rasmussen CE. Gaussian processes for machine learning. Cambridge: MIT Press; 2006, pp. 7–32. Irrthum A, Wehenkel L, Geurts P, et al.Inferring regulatory networks from expression data using tree-based methods. PloS ONE. 2010; 5(9):e12776. Huynh-Thu VA, Irrthum A, Wehenkel L, Geurts P. Inferring regulatory networks from expression data using tree-based methods. Plos ONE. 2009; 5(9):e12776. Shea MA, Ackers GK. The OR control system of bacteriophage lambda. A physical-chemical model for gene regulation. J Mol Biol. 1985; 181(2):211–30. Turner BM, Van Zandt T. A tutorial on approximate Bayesian computation. J Math Psych. 2012; 56:69–85. Wu Q, Smith-Miles K, Tian T. Approximate bayesian computation schemes for parameter inference of discrete stochastic models using simulated likelihood density. BMC Bioinfomatics. 2014; 15(S12):S3. Kitano H. Towards a theory of biological robustness. Mol Syst Biol. 2007; 3:137. Tian T, Smith-Miles K. Mathematical modeling of gata-switching for regulating the differentiation of hematopoietic stem cell. BMC Systems Biol. 2014; 8(S1):S8. This research work is supported by the National Natural Science Foundation of China (11571368), and Australian Research Council Discovery Project (DP120104460). T.T. is supported by the Australian Research Council (ARC) Discovery Projects (DP120104460) which supports the publication cost of this paper. About this supplement This article has been published as part of BMC Medical Genomics Volume 10 Supplement 5, 2017: Selected articles from the IEEE BIBM International Conference on Bioinformatics & Biomedicine (BIBM) 2016: medical genomics. The full contents of the supplement are available online at https://bmcmedgenomics.biomedcentral.com/articles/supplements/volume-10-supplement-5. TT conceived the research. JW and TT conducted the research. JW, XH, XZ and TT interpreted the results and wrote the paper. All authors edited and approved the final version of the manuscript. 
School of Statistics and Mathematics, Zhongnan University of Economics and Law, Wuhan, 430073, China: Jiangyong Wei
School of Computer, Central China Normal University, Wuhan, 430079, China: Xiaohua Hu
School of Mathematics and Statistics, Wuhan University, Wuhan, 430072, China: Xiufen Zou
School of Mathematical Sciences, Monash University, Melbourne, VIC 3800, Australia: Tianhai Tian
Correspondence to Tianhai Tian.
Additional file 1: Figure S1. Extended gene network with 17 regulations. (DOCX 76 kb)
Open Access. This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.
Wei, J., Hu, X., Zou, X. et al. Reverse-engineering of gene networks for regulating early blood development from single-cell measurements. BMC Med Genomics 10, 72 (2017). doi:10.1186/s12920-017-0312-z
Keywords: Genetic regulatory network; Blood stem cell; Single-cell experiment; Graphic model; Dynamic model
Clarita Saldarriaga Vargas1,2, Matthias Bauwens3, Ivo N. A. Pooters3, Stefaan Pommé4, Steffie M. B. Peters5, Marcel Segbers6, Walter Jentzen7, Andreas Vogg8, Floris H. P. van Velden9, Sebastiaan L. Meyer Viol10, Martin Gotthardt5, Felix M. Mottaghy3,8, Joachim E. Wildberger3, Peter Covens2 & Roel Wierts ORCID: orcid.org/0000-0002-9955-87543 Personalized molecular radiotherapy based on theragnostics requires accurate quantification of the amount of radiopharmaceutical activity administered to patients both in diagnostic and therapeutic applications. This international multi-center study aims to investigate the clinical measurement accuracy of radionuclide calibrators for 7 radionuclides used in theragnostics: 99mTc, 111In, 123I, 124I, 131I, 177Lu, and 90Y. In total, 32 radionuclide calibrators from 8 hospitals located in the Netherlands, Belgium, and Germany were tested. For each radionuclide, a set of four samples comprising two clinical containers (10-mL glass vial and 3-mL syringe) with two filling volumes were measured. The reference value of each sample was determined by two certified radioactivity calibration centers (SCK CEN and JRC) using two secondary standard ionization chambers. The deviation in measured activity with respect to the reference value was determined for each radionuclide and each measurement geometry. In addition, the combined systematic deviation of activity measurements in a theragnostic setting was evaluated for 5 clinically relevant theragnostic pairs: 131I/123I, 131I/124I, 177Lu/111In, 90Y/99mTc, and 90Y/111In. For 99mTc, 131I, and 177Lu, a small minority of measurements were not within ± 5% range from the reference activity (percentage of measurements not within range: 99mTc, 6%; 131I, 14%; 177Lu, 24%) and almost none were outside ± 10% range. However, for 111In, 123I, 124I, and 90Y, more than half of all measurements were not accurate within ± 5% range (111In, 51%; 123I, 83%; 124I, 63%; 90Y, 61%) and not all were within ± 10% margin (111In, 22%; 123I, 35%; 124I, 15%; 90Y, 25%). A large variability in measurement accuracy was observed between radionuclide calibrator systems, type of sample container (vial vs syringe), and source-geometry calibration/correction settings used. Consequently, we observed large combined deviations (percentage deviation > ± 10%) for the investigated theragnostic pairs, in particular for 90Y/111In, 131I/123I, and 90Y/99mTc. Our study shows that substantial over- or underestimation of therapeutic patient doses is likely to occur in a theragnostic setting due to errors in the assessment of radioactivity with radionuclide calibrators. These findings underline the importance of thorough validation of radionuclide calibrator systems for each clinically relevant radionuclide and sample geometry. In the last decades, the application of personalized molecular radiotherapy using theragnostics has gained a lot of interest in nuclear medicine [1, 2]. Theragnostic approaches aim to optimize molecular radiotherapy for individual patients using pre-therapeutic diagnostic imaging. In particular, assessment of the therapeutic absorbed dose to malignant tissue and to organs at risk based on these images facilitates a personalized therapeutic activity approach. These approaches require accurate quantification of the activity administered to patients both in diagnostic and therapeutic applications. 
Accurate activity calibration of radionuclide imaging equipment such as SPECT and PET cameras is also essential in theragnostics, to enable an accurate estimation of radiopharmaceutical uptake in patient tissues. In practice, radionuclide activity calibrators are used to measure the radiopharmaceutical activity to be administered to patients and are often the reference instrument for calibrating SPECT and PET systems. Radionuclide calibrators are typically provided with factory-set calibration factors for a variety of clinically relevant radionuclides. Usually, the calibration factors are calculated from energy-dependent sensitivity curves, determined experimentally on a dedicated reference device using well-calibrated traceable sources in standard containers [3]. In-factory calibration of medical devices is usually limited to a small subset of (long-lived) radionuclides to ensure proper response of each device with respect to the reference device. However, due to manufacturing tolerances in device specifications, variations in response among radionuclide calibrators of same model can occur, particularly in the low photon-energy range, which is generally not tested in the factory. Moreover, sample geometries used in clinical practice differ in shape, size, material, and filling volume from the standard container geometries used for activity calibrations. Since radionuclide calibrator measurements are sensitive to changes in system and sample measurement geometry [4, 5], the validity of generic factory-set calibration factors is not guaranteed for clinically used radionuclides and sample geometries. Therefore, several international guidelines recommend a thorough validation of radionuclide calibrator accuracy for all clinically used radionuclides and sample geometries during acceptance testing [6,7,8]. These guidelines typically recommend a measurement accuracy of ± 5–10% for diagnostic and ± 5% for therapeutic radionuclides. However, although practice varies widely across Europe, more often than not radionuclide calibrators are clinically implemented without such validation due to a lack of available certified activity standards of (short-lived) clinically used radionuclides, expertise, and time/costs required to perform this validation. In fact, a multi-center study investigating the radionuclide calibrator measurement accuracy among 15 Belgian hospitals performed between 2013 and 2015 revealed that none of the participating centers assessed the accuracy of clinically used radionuclides [9]. Several studies [9,10,11,12,13,14,15] have reported on the measurement accuracy of various individual diagnostic and therapeutic radionuclides, and demonstrated large measurement deviations (> ± 10%), particularly for 111In, 68Ga, 123I, and 90Y. However, no study has reported on the combined error of radiopharmaceutical activity measurements with radionuclide calibrators in the increasing application of personalized molecular radiotherapy based on a theragnostic approach. Therefore, we performed an international multi-center study on the clinical measurement accuracy of 32 radionuclide calibrators (7 different types from 4 vendors) for a comprehensive set of theragnostic radionuclides: imaging tracers 99mTc, 111In, 123I, and 124I, and their therapeutic companions 90Y, 177Lu, and 131I. 
Additionally, the combined deviation of activity measurements in a theragnostic setting was evaluated for 5 clinically relevant theragnostic pairs: 131I/123I and 131I/124I, which are used mostly for treatment of thyroid disorders such as differentiated thyroid cancer and hyperthyroidism; 177Lu/111In and 90Y/111In, used for peptide receptor radionuclide therapy of neuroendocrine neoplasms and prostate cancer; and 90Y/99mTc, used in the treatment of liver tumors and metastases with 90Y microspheres [1, 2, 16]. Stock solution preparation The radionuclides were obtained from various suppliers: [99mTc]-NaTcO4, [123I]-NaI, and [131I]-NaI from GE Healthcare (Eindhoven, The Netherlands); [124I]-NaI from BV Cyclotron VU (Amsterdam, The Netherlands); [177Lu]-LuCl3 from IDB Holland (Baarle-Nassau, The Netherlands); and [111In]-InCl3 and [90Y]-YCl3 from Curium (Petten, The Netherlands). [177Lu]-LuCl3, [111In]-InCl3, [90Y]-YCl3, [131I]-NaI, and [124I]-NaI stock solutions and samples were prepared within 24 h of the first day of the intercomparison measurements, which took place over three consecutive days. Due to their shorter half-life, [99mTc]-NaTcO4 and [123I]-NaI solutions were prepared at each measurement day. For each radionuclide, a stock solution was prepared with approximately 10 MBq mL−1 on the first measurement day. Stock solutions were prepared using sterile water (Baxter, The Netherlands) in a borosilicate glass container and immediately after preparation dispensed into samples to avoid precipitations. Evaluation of radionuclidic impurities Each stock solution was checked for radionuclidic impurities by high-resolution gamma-ray spectrometry using a high-purity germanium detector (GR1018; Mirion Technologies, Georgia, USA) as described in the supplemental material. No short- or long-lived radionuclidic impurities were found for 99mTc, 111In, 131I, and 90Y. For 123I and 124I, trace amounts of 125I were observed with a maximum radionuclidic impurity of 0.030% and 0.037%, respectively. For 177Lu, trace amounts of 177mLu were observed with a maximum radionuclidic impurity of 0.017%. Minimum detectable activities of potential impurities not detected (99Mo, 114mIn, 121Te, 88Y) and the effect of (potential) impurities on a radionuclide calibrator are reported in the supplemental material (Table S1) [17]. Determination of reference activity The reference (true) activity concentration of each stock solution was determined by the Belgian Nuclear Research Centre (SCK CEN) (Mol, Belgium) in collaboration with the Joint Research Centre (Geel, Belgium), which is specialized in primary and secondary standardization of radioactivity [18]. Reference activity measurements were performed using two secondary standard ionization chambers: a Fidelis (Southern-Scientific, Henfield, UK) and an ISOCAL-III (Vinten Instruments, UK). The latter is consistent with radioactivity standards from the JRC [9]. Both chambers are of the same design and use calibration factors traceable to the primary standards of activity of the UK National Physical Laboratory (NPL). From each stock solution, three 10-mL type 1+ Schott vials (SCHOTT AG Pharmaceutical Systems, Mainz, Germany) [19] were filled with 4 mL of solution (calibration geometry specified for the Fidelis), and their activities were assayed in both reference chambers. 
With the exception of 90Y, the reference activity of each Schott vial was determined from the mean of the activities measured with both the Fidelis and the ISOCAL, and the gravimetrically determined mass of stock solution in the vial. All activity measurements were corrected for background signal and for radioactive decay to a common reference time using the half-life values published in the NuDat database version 2.8 [20]. Additionally, before determination of the average value, the activity measurements were corrected for linearity, radionuclide impurities (significant only for (177mLu/)177Lu measurements), and deviations in response against the NPL master chamber (see supplementary Table S2). For the latter correction, radionuclide- and chamber-dependent correction factors were estimated from the NPL acceptance testing data of each system (corrections < 1.1% for the gamma emitters and 14.5% for the 90Y Fidelis measurements), as described in the supplementary data [21]. With the exception of 90Y, the Fidelis and ISOCAL systems agreed within ± 0.7% in Schott vial activity measurements. For 90Y, however, a difference in response of approximately 10% was observed between both systems. On the basis of this discrepancy and the lack of experimental data to correct the response of the ISOCAL against the NPL master chamber for pure beta emitters, the reference activity concentration of the 90Y stock solution was derived from activity measurements with the Fidelis only. The reference activity concentration of the radionuclide stock solution was then determined as the mean of the activity concentrations from the three Schott vials. The expanded uncertainty (95% confidence level) in the reference activity concentrations of the stock solutions was 2.0% for 99mTc, 1.7% for 111In, 2.2% for 123I, 2.0% for 124I, 1.1% for 131I, 1.2% for 177Lu, and 6.9% for 90Y (see supplementary Table S3). From each stock solution, a set of four samples comprising two different clinical containers each with two filling volumes were prepared: two 3-mL Luer-lock syringes (Terumo Europe, Leuven, Belgium) filled with 1 mL and 3 mL of solution, and two 11-mL TechneVial glass vials (Curium, Petten, The Netherlands) filled with 1 mL and 10 mL of solution. Each syringe was sealed with a combi-stopper (Braun, The Netherlands). The content mass of each sample was verified gravimetrically, by weighing the sample before and after filling with an analytical balance (XS105DU/M; Mettler-Toledo, Tiel, The Netherlands). The reference activity (Aref) of each sample was calculated by multiplying the content mass with the stock solution reference activity concentration. As the uncertainty in sample mass measurements was negligible compared to the uncertainty in radioactivity concentration, the relative uncertainty of the sample reference activity (uref) was approximately equal to the relative uncertainty of the stock solution activity concentration. Due to transport logistics, for one hospital, separate sets of samples (a 3-mL syringe filled with 3 mL and a TechneVial filled with 10 mL of stock solution) were prepared for all radionuclides. Clinical activity measurements Sample measurements were performed on a total of 32 radionuclide calibrator systems of 8 university hospitals located in the Netherlands, Belgium, and Germany. 
Of all systems, 4 were manufactured by Capintec Inc (Florham Park, USA), 11 by former MED Nuklear-Medizintechnik (now Nuvia Instruments, Dresden, Germany), 1 by PTW-Freiburg (Freiburg, Germany), and 16 by former Veenstra Instruments (now Comecer Netherlands, Joure, The Netherlands) (see supplemental Table S4). If applicable, measurements were performed using hospital-specific calibration settings and sample geometry corrections. Otherwise, standard factory settings were used (see supplementary Tables S5-S11). The standard (automatic) measurement (averaging) time of the calibrator was used. Three activity readings (n = 3) were taken sequentially, without moving the sample, at intervals of several seconds (dependent on observed system response time). The calibrator reading was left to settle (typically for about 15 to 30 s) before the first reading was taken. The range of the sample activities at the moment of clinical measurements is indicated in Table 1. Each measurement was corrected for background signal and radioactive decay. For each measurement triplet, the average net activity (Ām) and standard deviation (SD) were calculated. The statistical measurement uncertainty (um) was estimated at the 95% confidence level (coverage factor k = 4.30 for a t-distribution with two (n − 1) degrees of freedom), as follows: $$ {u}_m=\frac{k\bullet SD}{{\overline{A}}_m\bullet \sqrt{n}} $$ Table 1 Sample reference activities (minimum–maximum (25th percentile)) at the moment of clinical activity measurements Net activities were not corrected for the presence of radionuclidic impurities (if any). Individual radionuclides The radionuclide calibrator measurement accuracy was determined as the percentage deviation of the average measured activity Ām with respect to the sample reference activity Aref. For each radionuclide and sample geometry, the typical accuracy and reliability of activity measurements were described in terms of the median and the inter-quartile range (IQR) values of the measurement percentage deviations of all systems pooled together. Similarly, these metrics were used to assess the manufacturer dependence of measurement accuracy and inter-system variability. Sample geometry effects were evaluated by comparing the measurement deviations of the syringe and vial samples with similar filling volume (syringe 1 mL vs vial 1 mL, syringe 3 mL vs vial 10 mL). Theragnostic pairs Finally, since patient tissue doses are proportional to the amount of therapeutic activity administered and in a theragnostic approach the amount of therapeutic activity is based on diagnostic imaging, the combined systematic percentage deviation (bias) that would be associated to therapeutic doses (ED) was calculated for the theragnostic pairs 131I/123I, 131I/124I, 177Lu/111In, 90Y/99mTc, and 90Y/111In, as follows: $$ {E}_D=\left[\frac{{\left({\overline{A}}_m/{A}_{\mathrm{ref}}\right)}_{\mathrm{therapy}}}{{\left({\overline{A}}_m/{A}_{\mathrm{ref}}\right)}_{\mathrm{imaging}}}-1\right]\bullet 100\% $$ In total, 32 radionuclide calibrator systems were investigated. If no calibration setting was available for a specific radionuclide (see supplemental Tables S4-S10), that radionuclide was not measured on that system. One system (E1) appeared defective as it systematically underestimated the activity (typically by more than 10%) of all samples (see Fig. 1). Therefore, this system was excluded from further analysis. This resulted in a total of 745 activity measurement datasets for further analysis. 
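The two quantities defined above can be computed as in the sketch below; the numerical values in the example are invented readings, not data from this study.

```python
# Minimal sketch of u_m and the combined deviation E_D; example numbers are made up.
import numpy as np

def statistical_uncertainty_percent(readings, k=4.30):
    # u_m = k * SD / (A_mean * sqrt(n)); k = 4.30 for n = 3 readings (95% level)
    r = np.asarray(readings, dtype=float)
    return 100.0 * k * r.std(ddof=1) / (r.mean() * np.sqrt(r.size))

def combined_deviation_percent(A_m_ther, A_ref_ther, A_m_img, A_ref_img):
    # E_D = [(A_m/A_ref)_therapy / (A_m/A_ref)_imaging - 1] * 100%
    return ((A_m_ther / A_ref_ther) / (A_m_img / A_ref_img) - 1.0) * 100.0

# Example (hypothetical): a 90Y/99mTc pair measured on the same calibrator and vial
# u_m = statistical_uncertainty_percent([101.2, 100.8, 101.0])    # about 0.5%
# E_D = combined_deviation_percent(950.0, 1000.0, 1030.0, 1000.0)  # about -7.8%
```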
Box-whisker plots and mean values of the percentage deviations of all activity measurements used for analysis, for each radionuclide sample configuration tested (90Y whisker limits not shown: syringe 1 mL 423.9%, syringe 3 mL -383.6%). Additionally, the percentage deviations from defective measurements excluded from the analysis and box-whisker plots are shown as data points An overview of the intercomparison results is provided in Fig. 1 as box-whisker plots of the percentage deviations from all analyzed radionuclide calibrator measurements. Figures 2 and 3 show the individual percentage deviations grouped per manufacturer (excluding defective/invalid measurements), for the diagnostic and therapeutic radionuclides, respectively. Table 2 indicates the percentage of activity measurements that exceeded a given range of deviation from the reference activity. Percentage deviations of all the activity measurements used for analysis, for each system tested, for the diagnostic radionuclides. a 99mTc, b 111In, c 123I, d 124I. Systems using sample geometry calibration/correction factors are labeled with an asterisk (*) Percentage deviations of all the activity measurements used for analysis, for each system tested, for the therapeutic radionuclides. a 131I, b 177Lu, c 90Y (data not shown: D1 syringe 1 mL 158.2%, D1 syringe 3 mL 94.0%, H2 syringe 1 mL 423.9%, H2 syringe 3 mL 383.6%). Systems using sample geometry calibration/correction factors are labeled with an asterisk (*) Table 2 Percentage of activity measurements that exceed a given deviation from the reference activity Diagnostic radionuclides 99mTc For 99mTc, only 6% (7/110) of all measurements were not within ± 5% of the reference value. No dataset showed deviations larger than ± 10%. For all sample configurations, the median deviation was within 3.2% from the reference value and there was little spread in measurement deviations (largest IQR 4%), indicating a good and reproducible measurement accuracy for 99mTc. With a median difference of less than ± 2% in measurement deviations between syringes and vials (IQR 3%), the dependency on container type was mostly small. A substantial amount of the 111In measurements did not meet the recommended accuracy of ± 5% (51%; 53/104), nor the less strict limit of ± 10% (22%; 23/104). Although the median deviation of all systems was within 3.5% from the reference value for all sample types, the IQR ranged up to 12%. Additionally, the measurement accuracy often depended on sample container, with a median difference between syringes and vials of ± 8% (IQR 14%). Typically, this was most pronounced for systems that did not incorporate any correction for measurement geometry (i.e., Capintec systems, D3, E3, E4, G1–G3). However, even systems with sample geometry calibration/correction settings were not always accurate within ± 5% or ± 10% (Isomed F3–F6). The majority of the 123I measurements did not meet the recommended ± 5% accuracy limit (83%; 88/106). Moreover, a substantial amount of measurements did not meet the ± 10% limit either (35%; 37/106). For all the samples, the median deviation of all systems was within 7.4% from the reference value, and the largest IQR was 30%. Furthermore, we observed a large dependence on sample type with a median difference between syringes and vials of ± 17% (IQR 16%). 
Typically, systems without sample geometry corrections tended to overestimate the activity in syringes but underestimate the activity in vials, whereas the opposite trend was observed for systems that did incorporate sample geometry corrections. A substantial amount of the 124I measurements did not meet the recommended ± 5% (63%; 59/94) nor the less strict limit of ± 10% (15%; 14/94). For all the samples, the median deviation of all systems was within 4.9% from the reference value, and the largest IQR was 16%. Additionally, with a median difference between syringes and vials of ± 10% (IQR 8%), 124I showed a substantial sensitivity to sample geometry. Syringe measurements showed a rather small overestimation in measured activity (largest median deviation of 4.8%) with a relatively small IQR (maximum 6%). For vials, however, the accuracy typically depended on whether the system used sample-specific calibration/correction settings (median deviation of all vial measurements of 9.1%) or not (− 6.3%). Therapeutic radionuclides For 131I, 14% (16/111) and 3% (3/111) of all activity measurements were not within ± 5% and ± 10% of the reference values, respectively. For all the samples, the median deviation of all systems was within 1.1% from the reference value, and the largest IQR was 7%. Furthermore, with a median difference of less than ± 2% between the deviations of syringes and vials (IQR 3%), sample geometry effects were mostly small. 177Lu A substantial amount of all 177Lu measurements did not meet the recommended ± 5% (24%; 26/110) criterion. However, no dataset showed deviations exceeding the ± 10% limit. For all the samples, the median deviation was within 3.7% from the reference value. All IQR values were within 4%, indicating a fair to good reproducible measurement accuracy. Moreover, with a median difference of approximately ± 1% between the deviations of syringes and vials (IQR 2%), sample geometry effects were small. The majority of the 90Y measurements did not meet the recommended ± 5% accuracy limit (61%; 67/110). Moreover, a substantial amount of measurements did not meet the ± 10% limit (26%; 28/110). We observed a large variability in measurement accuracy depending on the system (type) and manufacturer. Isomed systems, using specific calibration settings for each sample configuration, often showed very large underestimation (> 30%) of the 90Y reference activity, most pronounced for syringes, with IQR values up to 45%. Additionally, we found a large variability in performance between systems of the same type using identical calibration factors (e.g., A1 vs F1). Moreover, with a median difference between the deviations for syringes and vials of ± 33% (IQR 30%), geometry effects were very large. Instead, the other radionuclide systems typically performed better, particularly for vials. For all sample configurations, the mean deviations were within 3.5%, and the largest IQR was 12%. With a median difference in measurement deviations between syringes and vials of ± 6% (IQR 8%), geometry effects were much smaller compared to the Isomed systems. Interestingly, two systems resulted in unexpectedly high deviations from the reference activity: Isomed D1 (maximum deviation 158%) and Veenstra H2 (maximum deviation 424%). Figure 4 shows the combined systematic percentage deviations for the theragnostic pairs considered (131I/123I, 131I/124I, 177Lu/111In, 90Y/99mTc, 90Y/111In), when both radionuclides are measured on the same device with the same sample geometry. 
Percentage combined deviations for the theragnostic radionuclide pairs considered, when both radionuclides are measured on the same device and using the same sample geometry. a 131I/123I, b 131I/124I, c 177Lu/111In, d 90Y/111In, e 90Y/99mTc The combined deviations of the theragnostic pairs show substantial variability in measurement accuracy between systems and manufacturers with a dependency on calibration/correction setting and sample geometry. Generally speaking, roughly half of all investigated theragnostic combinations would introduce a bias in the therapeutic dose larger than ± 5%, and for one quarter of these combinations in a bias larger than ± 10% (Table 3). This performance is even worse when activity measurements in different containers are combined: of all administrations, two thirds would introduce a bias larger than ± 5% and one third larger than ± 10% (data not shown). Table 3 Percentage of theragnostic activity measurements that exceed a given deviation from the reference activities Administering the correct amount of therapeutic activity to patients is of utmost importance in personalized molecular radiotherapy. Typically, (inter)national guidelines recommend stricter accuracy demands (± 5%) for therapeutic than for diagnostic radionuclides (± 5–10%) [6,7,8]. However, in case of theragnostics, where the therapeutic activity is optimized based on pre-therapeutic dosimetry/uptake calculations using diagnostic imaging, accurate quantification of the diagnostic activity is of equal importance as accurate therapeutic activity quantification. Therefore, to prevent introducing a substantial error in the therapeutic doses delivered to patients, we advocate to apply the ± 5% accuracy limit also for diagnostic radionuclides in a theragnostic setting. In our study, we found one radionuclide calibrator (E1) that showed large deviations (> 10% underestimations) for all radionuclides, therefore appearing to be malfunctioning. This system was recently installed and was not yet (fully) validated nor released for clinical use. These observations indicate that extensive validation of all clinically used radionuclides is of vital importance. This intercomparison shows that radionuclide calibrator measurements of 99mTc, still the workhorse of nuclear medicine, are (nearly) always correct, in agreement with values reported in literature [9, 14]. The same cannot be said for the other diagnostic radionuclides evaluated. For 111In, 123I, and 124I, measurement deviations frequently exceeded the ± 5% and often even the ± 10% limits. This is in agreement with values reported in literature for 111In and 123I [9, 10, 12]. To the best of our knowledge, no multi-center data are available on the typical accuracy of 124I clinical activity measurements. In particular, these radionuclides (111In, 123I, and 124I) show a large dependence on sample geometry (particularly sample container) caused by self-absorption of the emitted low-energy X-rays within the sample itself. Consequently, accurate activity measurement of these radionuclides requires specific calibration or correction factors for the sample geometry [22, 23]. When factory settings dedicated to specific sample configurations are available, they must be experimentally verified prior to clinical use, as they might not be accurate for the specific containers used locally. This was the case for many activity measurements of 123I, 111In, and 124I. 
Alternatively, selective absorption of the low-energy X-rays using a copper/aluminum filter is an effective method to minimize the variability in activity measurements caused by sample geometry [23, 24]. In this intercomparison, a copper filter was available for two systems, but appropriate calibration factors for measurements with the filter had yet to be determined. Regarding the therapeutic radionuclides, 177Lu measurements were almost always within ± 5% of the reference activity and never deviated by more than ± 10%, in agreement with values previously reported for Capintec systems [13]. A tendency to overestimate the reference activity values by typically a few percent was observed, which might (partially) be attributed to the calibrators being sensitive to the presence of the 177mLu impurity. Our study presents new data for 177Lu, particularly on the accuracy of medical calibrators from different suppliers and using clinical sample configurations. As for 177Lu, the majority of calibrators were accurate for measuring 131I, albeit with slightly larger deviations (sometimes > ± 5%, rarely > ± 10%). This is in agreement with values reported in the literature [15]. In contrast, for 90Y, some systems showed incorrect measurements to an unacceptable degree: the deviations ranged from a 72% underestimation to a 424% overestimation. Indeed, large measurement errors of up to ± 50% have been reported in the literature [13]. In particular, although all Isomed devices used factory-set corrections for sample geometry, they were highly sensitive to the sample container and the volume of solution, and large measurement deviations were observed. Also, two systems (D1 and H2) showed extremely high overestimations for the syringe measurements, but not for the vials. Interestingly, this effect was not observed for other systems of the same type with the same (factory-set) calibration factors. Most likely, in these two systems, high-energy beta radiation was able to reach the ionization chamber from the syringe samples but not from the vial samples. Indeed, the radionuclide calibrator response to high-energy beta particles is highly sensitive to even small variations in the material and design specifications of the measurement set-up [4]. This clearly indicates the importance of extensive validation of each individual system for each radionuclide and each clinically used sample geometry.
Theragnostic applications
The present study sets the first reference for the typical combined errors associated with clinical radiopharmaceutical activity measurements in a theragnostic setting. Considering five clinically relevant theragnostic pairs (131I/123I, 131I/124I, 177Lu/111In, 90Y/99mTc, 90Y/111In), this intercomparison study showed that poor accuracy in radionuclide calibrator activity measurements of therapeutic and diagnostic radionuclides can introduce a relatively large (> ± 10%) bias into the therapeutic doses delivered to patients in theragnostic applications. Such errors should be minimized as much as practically possible; hence our recommendation to apply a standard ± 5% accuracy limit to calibrator activity measurements of both therapeutic and diagnostic radionuclides. The best way to limit the error in the administered activity is to ensure accurate and reproducible activity measurements of both radionuclides involved in the theragnostic application.
This can be achieved by a proper evaluation of the accuracy of the calibrators' measurement settings for the radionuclides and sample configurations found in clinical practice, together with an assessment of other sources of uncertainty in the activity measurements and proper maintenance through a quality assurance program [6]. These procedures may lead to re-calibration of the device or determination of appropriate correction factors, and to optimization of the source configurations (e.g., choice of container) or of other measurement settings or procedures used for activity measurements. After all, the error in the assessment of patient-administered activities is only one of several sources of uncertainty in the dosimetry process [25]. Minimizing its contribution to the overall uncertainty is the best starting point towards patient treatment optimization in molecular radiotherapy.
Uncertainties in the clinical activity measurements of this study
As reported in detail by Gadd et al. [5], radioactivity measurements using radionuclide calibrators are affected by different sources of uncertainty, including the accuracy of calibration factors, sample geometry effects, photon-emitting radionuclide impurities, background variability, non-linearity of the system response, short-term response variability, reproducibility of the sample position, and the influence of external shielding. These uncertainty components depend on the specific measurement set-up (calibrator unit and its accessories, shielding, local background field), the radionuclide, and/or the level of activity (ionization current) being measured. In this study, the clinical measurement accuracy of radionuclide calibrators was tested for 7 radionuclides used in theragnostics, each in 4 sample configurations. The effect of the type of sample container (syringe vs vial) was evaluated. As previously addressed, this effect was a significant source of variability in the activity measurements of all the radionuclides, with the magnitude of the effect (median) being large (> ± 5%) for 90Y, 123I, 111In, and 124I; mostly small (± 2%) for 131I and 99mTc; and small (± 1%) for 177Lu. The influence of short-term response variability on the activity measurements was reduced by taking the average of three consecutive activity readings. Although the measurement statistical uncertainty um was within 0.7% for the large majority (> 75%) of the activity datasets, which indicates good short-term measurement reproducibility, it is not negligible and in a clinical setting (where an average value is generally not estimated) would cause a spread in the activity assessment. The background reading was subtracted from all activity measurements. However, the uncertainty due to background variability was not assessed. This uncertainty can have an important bearing on the measurement of low activities and of radionuclides with a low response per unit activity, such as 90Y. In this study, the highest background-to-sample reading ratios were obtained, as expected, with the vials containing 1 mL (samples with low activity), and were ≤ 3.7% for 90Y, 1.7% for 177Lu, and 0.9% for the other radionuclides. For the vials filled with 10 mL (the samples with the highest activity), the background fractions were considerably lower (≤ 0.6% for 90Y and 0.2% for the other radionuclides).
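The error bounds derived from these background fractions in the next paragraph follow from a simple propagation step; the sketch below reproduces that arithmetic under the assumption (ours, not stated explicitly in the text) that an error on the background reading is referred to the net, background-subtracted reading.

```python
def background_error_on_net(bkg_to_gross_ratio: float, bkg_uncertainty: float = 0.10) -> float:
    """Relative error on the net activity caused by an error on the background.

    With net = gross - background and r = background/gross, an absolute
    background error of (bkg_uncertainty * background) corresponds to a
    relative error on the net reading of bkg_uncertainty * r / (1 - r).
    """
    r = bkg_to_gross_ratio
    return bkg_uncertainty * r / (1.0 - r)


# Background-to-sample reading ratios quoted above for the 1 mL vials.
for label, ratio in {"90Y": 0.037, "177Lu": 0.017, "other nuclides": 0.009}.items():
    print(f"{label:>14}: {background_error_on_net(ratio):.2%}")
# Prints roughly 0.38%, 0.17% and 0.09%, matching the bounds quoted below.
```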
Assuming a high uncertainty of 10% in the background measurement, the potential error introduced into the estimated net activities of the low-activity vial samples of this study would be ≤ ± 0.38% (90Y), ± 0.17% (177Lu), and ± 0.09% (other radionuclides). Although such a potential error is not negligible for 90Y and 177Lu, it is much smaller than the measurement deviations observed in this intercomparison for the vial and syringe samples with 1–3 mL, suggesting that it is not the main cause of the spread in the 90Y and 177Lu measurements of the samples with the lowest activities. For the other samples and radionuclides, the potential error from the background uncertainty is negligible. All radionuclide solutions were checked for the presence of photon-emitting impurities by high-resolution gamma spectrometry. Impurities were detected only in 123I (125I), 124I (125I), and 177Lu (177mLu). Of these impurities, only 177mLu has a significant effect on activity measurements in a radionuclide calibrator (0.51% over-response for the Fidelis). Since the activities measured with the hospital calibrators were not corrected for this effect, it remains a source of uncertainty in the 177Lu intercomparison results. Information regarding other sources of uncertainty was not gathered from the participating hospitals. Nevertheless, hospitals were encouraged to make a more detailed uncertainty assessment of their activity measurements, since this is essential to evaluate the agreement with the reference values and to determine which corrective actions are needed to improve the accuracy and reliability of their activity measurements. In general, such an assessment should be within the practical reach of hospitals, since most of the sources of error mentioned above can be quantified by following a thorough quality control program [5, 6, 8].
Study limitations
It should be noted that not all the calibrator systems tested were clinically used to measure all the radionuclides considered in this study. Since hospitals may validate a device only for the specific radionuclides used in their clinical practice, some specific results of this study may not fully represent the local (hospital) measurement capability. Clinical activity measurements can carry additional uncertainties beyond those accounted for in this study. The activities administered to patients in nuclear medicine theragnostics are in the range of tens to several hundred megabecquerels for imaging studies and a few to several gigabecquerels for therapeutic purposes, whereas in this study the sample activities were in the range of 4–162 MBq for diagnostic radionuclides and 9–312 MBq for therapeutic radionuclides (see values per radionuclide in Table 1). Linearity effects, which are typically in the range of ± 1% to a few percent [3, 5], become more important over the much broader range of activities measured in clinical applications. Also, in clinical practice, therapeutic and diagnostic radionuclides are often not measured using the same (sample) measurement geometry. For instance, 90Y is often assayed using manufacturer-supplied vials and/or acrylic shields. Indeed, the (combined) errors in theragnostic activity measurements will depend on the specific measurement settings used for each radionuclide. Moreover, the response of a radionuclide calibrator to 90Y also depends on the physicochemical form of the 90Y compound [26]. In this study, the 90Y samples were prepared from a 90Y chloride aqueous solution.
Yet, in liver radioembolization procedures, which represent the main clinical application of the theragnostic pair 90Y/99mTc, 90Y is administered to patients in the form of suspensions of resin/glass microspheres. Activity measurements of 90Y microspheres may require the use of different calibration factors and present further challenges, whose associated errors might not be reflected in the overall measurement performance obtained here using 90Y chloride. This intercomparison showed that, while 99mTc, 131I, and 177Lu activity measurements are mostly accurate, there is still significant room for improvement for 111In, 123I, 124I, and 90Y. For these radionuclides, the radionuclide calibrator response is particularly sensitive to the sample and detector geometry. Consequently, substantial over- or underdosing (> ± 10%) of therapeutic administrations is likely to occur in a theragnostic setting. A key message from this intercomparison is that, prior to clinical release, radionuclide calibration factors and sample geometry correction factors should be verified for each radionuclide and sample configuration used in practice. A unified international standard for testing and calibrating medical radionuclide calibrators is urgently needed to boost the implementation of quantitative accuracy in nuclear medicine theragnostics. The data that support the findings of this study are available from the corresponding author RW upon reasonable request and with permission of the institution where the measurement data were acquired.
IQR: Inter-quartile range; JRC: Joint Research Centre; NPL: National Physical Laboratory; PET: Positron emission tomography; SCK CEN: Belgian Nuclear Research Centre; SPECT: Single photon emission computed tomography
Eberlein U, Cremonesi M, Lassmann M. Individualized dosimetry for theragnostics: necessary, nice to have, or counterproductive? J Nucl Med. 2017;58:97S–103S. Herrmann K, Schwaiger M, Lewis JS, Solomon SB, McNeil BJ, Baumann M, et al. Radiotheranostics: a roadmap for future development. Lancet Oncol. 2020;21(3):e146–56. https://doi.org/10.1016/S1470-2045(19)30821-6. Capintec. CRC-25R owner's manual. Florham Park: Capintec Inc; 2017. Laedermann JP, Valley JF, Bulling S, Bochud FO. Monte Carlo calculation of the sensitivity of a commercial dose calibrator to gamma and beta radiation. Med Phys. 2004;31(6):1614–22. Gadd R, Baker M, Nijran KS, Owens S, Thomson W, Woods MJ, et al. Protocol for establishing and maintaining the calibration of medical radionuclide calibrators and their quality control, measurement good practice guide no. 93. National Physical Laboratory: Teddington; 2006. IAEA. Quality assurance for radioactivity measurement in nuclear medicine, Technical reports series no 454. Vienna: International Atomic Energy Agency; 2006. Busemann Sokole E, Plachcínska A, Britten A. EANM Physics Committee. Acceptance testing for nuclear medicine instrumentation. Eur J Nucl Med Mol Imaging. 2010;37:672–81. AAPM. The selection, use, calibration and quality assurance of radionuclide calibrators used in nuclear medicine, Report of AAPM Task Group 181. College Park: American Association of Physicists in Medicine; 2012. Saldarriaga Vargas C, Rodríguez Pérez S, Baete K, Pommé S, Paepen J, Van Ammel R, et al. Intercomparison of 99mTc, 18F and 111In activity measurements with radionuclide calibrators in Belgian hospitals. Phys Med. 2018;45:134–42. Bauwens M, Pooters I, Cobben R, Vissera M, Schnerra R, Mottaghy F, et al.
A comparison of four radionuclide dose calibrators using various radionuclides and measurement geometries clinically used in nuclear medicine. Phys Med. 2019;60:14–21. Bailey DL, Hofman MS, Forwood NJ, O'Keefe GJ, Scott AM, van Wyngaardt WM, et al. Accuracy of dose calibrators for 68Ga PET imaging: unexpected findings in a multicenter clinical pretrial assessment. J Nucl Med. 2018;59(4):636–8. Ferreira KM, Fenwick AJ. 123I intercomparison exercises: assessment of measurement capabilities in UK hospitals. Appl Radiat Isot. 2018;134:108–11. Fenwick A, Baker M, Ferreira K, Keightley J. Comparison of 90Y and 177Lu measurement capability in UK and European hospitals. Appl Radiat Isot. 2014;87:10–3. MacMahon D, Townley J, Bakhshandeiar E, Harms AV. Comparison of Tc-99 m measurements in UK hospitals, 2006, NPL Report DQL-RN 018. National Physical Laboratory: Teddington; 2007. Ciocanel M, Keightley JD, Scott CJ, Woods MJ. Intercomparisons of 131I solution and capsule sources in UK hospitals, 1999, NPL report CIRM 31. National Physical Laboratory: Teddington; 1999. Stokke C, Minguez Gabiña P, Solný P, Cicone F, Sandström M, Sjögreen Gleisner K, et al. Dosimetry-based treatment planning for molecular radiotherapy: a summary of the 2017 report from the Internal Dosimetry Task Force. EJNMMI Phys. 2017;4(1):27. https://doi.org/10.1186/s40658-017-0194-3. ISO. Determination of the detection limit and decision threshold for ionizing radiation measurements — part 3: fundamentals and application to counting measurements by high resolution gamma spectrometry, without the influence of sample treatment, EN ISO 11929-3:2000. Geneva: International Organization for Standardization; 2000. Pommé S. Methods from primary standardization of activity. Metrologia. 2007;44:17–26. ISO. Injection Containers and Accessories. Injection Vials Made of Glass Tubing, EN ISO 8362-1:2009. Geneva: International Organization for Standardization; 2010. Brookhaven National Laboratory, https://www.nndc.bnl.gov/nudat2/. Accessed 15 October 2020. Townson R, Tessier F, Galea R. EGSnrc calculation of activity calibration factors for the Vinten ionization chamber. Appl Radiat Isot. 2018;134:100–4. Peitl PK, Tomse P, Kroseli M, Socana A, Hojkera S, Pecar S, et al. Influence of radiation source geometry on determination of 111In and 90Y activity of radiopharmaceuticals. Nucl Med Commun. 2009;30(10):807-814. Beattie BJ, Pentlow KS, O'Donoghue J, Humm JL. A recommendation for revised dose calibrator measurement procedures for 89Zr and 124I. PLoS One. 2014;9(9):e106868. Kowalsky RJ, Johnston RE. Dose calibrator assay of iodine-123 and indium-111 with a copper filter. J Nucl Med Technol. 1998;26(2):94–8. Gear JI, Cox MG, Gustafsson J, Sjögreen Gleisner K, Murray I, Glatting G, et al. EANM practical guidance on uncertainty analysis for molecular radiotherapy absorbed dose calculations. Eur J Nucl Med Mol Imaging. 2018;45(13):2456–74. Ferreira KM, Fenwick AJ, Arinc A, Johansson LC. Standardization of 90Y and determination of calibration factors for 90Y microspheres (resin) for the NPL secondary ionization chamber and a Capintec CRC-25R. Appl Radiat Isot. 2016;109:226–30. 
The authors kindly thank IDB Holland for providing 177Lu without charge, Andrew Fenwick (NPL) for the helpful discussions regarding the activity standardizations, Reid Townson (National Research Council Canada) for providing simulation data on the reference chambers' energy dependence, Mikael Hult (JRC) for logistic support, and GE Healthcare for providing packing materials without charge. This study was funded by Maastricht University Medical Center and the Belgian Nuclear Research Centre.
Radiation Protection Dosimetry and Calibrations, Belgian Nuclear Research Centre (SCK CEN), Mol, Belgium Clarita Saldarriaga Vargas In vivo Cellular and Molecular Imaging, Vrije Universiteit Brussel, Jette, Belgium Clarita Saldarriaga Vargas & Peter Covens Department of Radiology and Nuclear Medicine, Maastricht University Medical Center, P.O. Box 5800, 6202 AZ, Maastricht, The Netherlands Matthias Bauwens, Ivo N. A. Pooters, Felix M. Mottaghy, Joachim E. Wildberger & Roel Wierts European Commission, Joint Research Centre (JRC), Geel, Belgium Stefaan Pommé Department of Radiology, Nuclear Medicine and Anatomy, Radboudumc, Nijmegen, The Netherlands Steffie M. B. Peters & Martin Gotthardt Department of Radiology and Nuclear Medicine, Erasmus MC, Rotterdam, The Netherlands Marcel Segbers Department of Nuclear Medicine, University of Duisburg-Essen, Essen, Germany Walter Jentzen Department of Nuclear Medicine, University Hospital RWTH Aachen University, Aachen, Germany Andreas Vogg & Felix M. Mottaghy Department of Radiology, Leiden University Medical Center, Leiden, The Netherlands Floris H. P. van Velden Department of Radiology and Nuclear Medicine, University Medical Center Utrecht, Utrecht, The Netherlands Sebastiaan L. Meyer Viol
RW, MB, CSV, MG, FMM, and JEW contributed to the conception and design of the study. RW, CSV, INAP, SMBP, MS, WJ, AV, FHPV, SLMV, and PC acquired (part of) the radionuclide calibrator data. CSV and SP determined the reference activity of the samples. CSV, RW, and MB analyzed and interpreted the intercomparison data and drafted the manuscript. All authors critically revised the manuscript. The authors read and approved the final manuscript. Correspondence to Roel Wierts.
Saldarriaga Vargas, C., Bauwens, M., Pooters, I.N.A. et al. An international multi-center investigation on the accuracy of radionuclide calibrators in nuclear medicine theragnostics. EJNMMI Phys 7, 69 (2020). https://doi.org/10.1186/s40658-020-00338-3
Keywords: Activity measurement; Radionuclide calibrator; Theragnostics
The effects of a rise in cigarette price on cigarette consumption, tobacco taxation revenues, and of smoking-related deaths in 28 EU countries-- applying threshold regression modelling Chun-Yuan Yeh1, Christian Schafferer1, Jie-Min Lee2, Li-Ming Ho3 & Chi-Jung Hsieh4
European Union public healthcare expenditure on treating smoking-attributable diseases is estimated at over €25bn annually. The reduction of tobacco consumption has thus become one of the major social policies of the EU. This study investigates the effects of price hikes on cigarette consumption, tobacco tax revenues and smoking-caused deaths in 28 EU countries. Employing panel data for the years 2005 to 2014 from Euromonitor International, the World Bank and the World Health Organization, we used income as a threshold variable and applied threshold regression modelling to estimate the elasticity of cigarette prices and to simulate the effect of price fluctuations. The results showed that there was an income threshold effect on the demand response to cigarette prices in the 28 EU countries: countries with a gross national income (GNI) per capita lower than US$5418 had the largest cigarette price elasticity, at −1.227. The results of the simulated analysis showed that a rise of 10% in cigarette price would significantly reduce cigarette consumption as well as the total death toll caused by smoking in all the observed countries, but would be most effective in Bulgaria and Romania, followed by Latvia and Poland. Additionally, an increase in the number of MPOWER tobacco control policies implemented at the highest level of achievement would help reduce cigarette consumption. It is recommended that all EU countries levy higher tobacco taxes to increase cigarette prices, and thus in effect reduce cigarette consumption. The subsequent increase in tobacco tax revenues would be instrumental in covering expenditures related to tobacco prevention and control programs.
Smoking prevalence and tobacco control
In 2017, every fourth EU citizen smoked, and the annual death toll from smoking was approximately 700,000 [1]. EU public healthcare expenditure on treating smoking-attributable diseases is estimated at over €25bn annually, constituting a considerable burden on public health care systems. Moreover, €8bn is reportedly lost annually in productivity from deaths, absenteeism and early retirement linked to smoking [2]. The reduction of tobacco consumption has thus become one of the major social policies of the EU, underscoring the necessity of implementing the six MPOWER tobacco control policies proposed by the World Health Organisation (WHO) in 2008: (i) increases in the tobacco tax; (ii) monitoring of tobacco usage; (iii) support for quitters; (iv) creation of a smoking-free environment; (v) warning against the dangers of tobacco; and (vi) banning tobacco advertising, promotion and sponsorship. According to Levy et al., if 41 countries across the world had implemented at least one MPOWER policy between 2007 and 2010, the number of smokers would have been cut by 14.8 million, and 7.4 million people would have avoided death caused by smoking [3]. In particular, increased tobacco taxation would have saved 3.5 million people, suggesting that taxation would have been the most effective single intervention to reduce the demand for cigarettes by raising cigarette prices [3]. The relationship between tobacco price and consumption is also illustrated by the fact that EU countries with lower cigarette prices tend to have higher rates of smoking [4].
Moreover, unlike other policy tools, such as bans on tobacco advertising, taxation not only effectively decreases tobacco consumption but also has the beneficial side effect of increasing national tax revenues [5, 6].
Price elasticities and cigarette demand
The effectiveness of tax increases on cigarette consumption is mainly determined by cigarette price elasticity. Although numerous studies have demonstrated that cigarette price elasticities of low- and middle-income countries are higher than those of high-income countries [7], other studies have shown that price elasticities in several developing countries were similar to those of developed countries [8, 9]. Differences in cigarette price elasticities may have been caused by applying different demand functions, information patterns, and estimation methods [10]. Therefore, employing the same demand function, information pattern, and estimation method can facilitate a standardized comparison of the price elasticity of demand for cigarettes among different countries. Studies have also shown that there are geographical variations in smoking behaviour [11, 12]. In Eastern European countries, such as Slovenia, Romania and Slovakia, tobacco prevalence in rural and remote areas is higher than in urban areas (World Bank, World Development Indicators (WDI)), whereas in Western European countries, such as Germany, Sweden, Finland and Denmark, the opposite has been reported [13]. Smokers living in rural and remote areas tend to have a lower social and economic status and are more sensitive to price fluctuations [14, 15]; hence, this group is reportedly more likely to opt for lower-priced products or to consider cessation [16, 17]. Moreover, cigarette prices in Eastern European countries are considerably lower than in other parts of Europe, leading to increased illicit trading in cigarettes and to subsequent changes in consumption patterns throughout the European Union [18]. Numerous previous studies have adopted a linear model to estimate the cigarette demand structure [4, 9, 19]. However, this estimation method might fail to fully capture the price-volume relationship. Huang and Yang estimated the cigarette demand relationship across all US states and found that there were income threshold effects in the demand for cigarettes [20], which indicates that the cigarette price elasticity of demand differs across income thresholds.
The goals of this study
This study employed threshold regression modelling and used income as a threshold variable to estimate the price elasticity of cigarette demand. Furthermore, a cigarette price increase of 10% was used to analyse the effects of price increases on cigarette consumption, tobacco taxation, and the death toll from smoking. The findings of this study may serve as an important reference for EU health management authorities to revise tobacco prevention and control policies.
Study design and data
In this study, data for all 28 EU countries were collected to construct a cigarette demand structure model. One dependent variable and five independent variables were considered. Per capita cigarette consumption for those aged 15 and over was chosen as the dependent variable. Independent variables comprised cigarette prices, cigarette prices in Eastern European countries, gross national income (GNI), rural population, and the number of MPOWER measures implemented at the highest level of achievement.
Data regarding cigarette consumption, cigarette prices, and cigarette prices in Eastern European countries were extracted from the 2005–2014 Euromonitor International market research database [21]. Euromonitor International is recognized as a leading independent provider of global business intelligence, specialized in creating worldwide data and analysis on consumer products and services. Consumption of cigarette products was calculated based on annual cigarette consumption per capita for those aged 15 and over. The retail price for a pack of cigarettes in each country was calculated by dividing cigarette sales revenues by cigarette consumption, and the resulting price was deflated using consumer price indexes. Cigarette price in Eastern European countries refers to the combined average cigarette price in Estonia, Latvia, Lithuania, Poland, the Czech Republic, Slovakia, Hungary, Romania, Slovenia, Croatia and Bulgaria. GNI per capita data were converted to US dollars using the World Bank Atlas method [22], divided by the midyear population, and deflated based on consumer price indexes. The required data were retrieved from the World Bank's database. In this study, the ratio of the rural population to the total population was used in the analysis. Rural population refers to the number of people living in rural areas as defined by the National Statistical Offices and was calculated as the difference between the total population and the urban population. Data on the ratio of the rural population to the total population are World Bank estimates [22] and are based on the United Nations World Urbanization Prospects [23]. As to the number of MPOWER measures implemented at the highest level of achievement by each country in each year, figures for the years 2007 to 2014 were taken from the 2015 WHO report on the global tobacco epidemic [17]. Data for the years 2005, 2006 and 2015 were unavailable and were treated as missing data in the analysis.
Data characteristics
Table 1 shows the list of variables used in the analysis and the data characteristics. In 2014, per capita cigarette consumption in the 28 EU countries for adults aged 15 years and over was the highest in Slovenia at 2098 cigarettes, followed by the Czech Republic (1720 cigarettes), Austria (1631 cigarettes), Greece (1552 cigarettes), and Romania (1501 cigarettes); those of the other EU countries were all less than 1500 cigarettes. In 2014, the average real retail price was the highest in the United Kingdom at US$9.48 per pack, followed by Ireland (US$9.04). In addition, the average number of MPOWER measures implemented at the highest level of achievement was 3 in Ireland, Spain, and the United Kingdom, followed by Belgium, Bulgaria, Denmark, Greece, Malta, and the Netherlands at 2, whereas in the remaining EU countries the number of measures was below 2.
Table 1 Comparison of cigarette consumption, retail prices and gross national income from 2005 to 2014 in the European Union
Empirical specification and analysis
To calculate cigarette price elasticity, a cigarette demand structure model was constructed using cigarette consumption as the dependent variable and cigarette price, cigarette prices in Eastern European countries, GNI, rural population, and the highest number of MPOWER measures implemented as explanatory variables. Cigarette price elasticity was estimated with income as the threshold variable using the panel data threshold regression model of Hansen [24].
The baseline cigarette demand structure model of the 28 EU countries is as follows:

$$ \ln C_{it} = \beta_{1i} + \beta_2 \ln P_{it} + \beta_3 \ln GNI_{it} + \beta_4 Rural_{it} + \beta_5 MP_{it} + \beta_6 \ln NeiP_{it} + \varepsilon_{it} \qquad (1) $$

where
\( C_{it} \): the annual cigarette consumption per capita in the population aged 15 years and over in country i in year t;
\( P_{it} \): the cigarette price per cigarette in country i in year t;
\( GNI_{it} \): the per capita national income in country i in year t;
\( Rural_{it} \): the rural population percentage in country i in year t;
\( MP_{it} \): the highest number of MPOWER measures implemented in country i in year t;
\( NeiP_{it} \): the cigarette price per cigarette in Eastern European countries in year t.

Formula (1) is the traditional constant-elasticity log-linear demand model, but the influence of cigarette prices on cigarette consumption may not be limited to a single pattern. That is, there may also be non-linear structural relationships, such as income threshold effects on the demand for tobacco products [20]. We thus used income as the threshold variable in the threshold regression model to estimate the elasticity of cigarette prices and to simulate the effects of price fluctuations. One feature of a threshold regression model is that the threshold variable is ordered so as to provide a structural breakpoint between regimes: observations are assigned to different regimes according to whether the threshold variable is greater or smaller than the threshold value. The panel threshold regression model is often "de-meaned" first, in order to eliminate the individual effect \( \beta_{1i} \) [24]. If our baseline model contains three regimes of national income that are conditional on two threshold values, Eq. 1 can be written as

$$ \begin{aligned} \ln C_{it}^{*} = {}& \beta_{21}\ln P_{it}^{*}\,(GNI \le \gamma_1) + \beta_{22}\ln P_{it}^{*}\,(\gamma_1 \le GNI \le \gamma_2) + \beta_{23}\ln P_{it}^{*}\,(GNI > \gamma_2) \\ &+ \beta_{31}\ln GNI_{it}^{*}\,(GNI \le \gamma_1) + \beta_{32}\ln GNI_{it}^{*}\,(\gamma_1 \le GNI \le \gamma_2) + \beta_{33}\ln GNI_{it}^{*}\,(GNI > \gamma_2) \\ &+ \beta_4 Rural_{it}^{*} + \beta_5 MP_{it}^{*} + \beta_6 \ln NeiP_{it}^{*} + \varepsilon_{it}^{*} \end{aligned} \qquad (2) $$

where \( \overline{C}_i = T^{-1}\sum_{t=1}^{T} C_{it} \); \( C_{it}^{*} = C_{it} - \overline{C}_i \); \( P_{it}^{*} = P_{it} - \overline{P}_i \); \( GNI_{it}^{*} = GNI_{it} - \overline{GNI}_i \); \( Rural_{it}^{*} = Rural_{it} - \overline{Rural}_i \); \( MP_{it}^{*} = MP_{it} - \overline{MP}_i \); \( NeiP_{it}^{*} = NeiP_{it} - \overline{NeiP}_i \); \( \varepsilon_{it}^{*} = \varepsilon_{it} - \overline{\varepsilon}_i \); and \( \gamma_1 \) and \( \gamma_2 \) are the two threshold values that partition \( GNI_{it} \). We use ordinary least squares (OLS) to estimate Eq. 2. The selection of threshold variables in the empirical model can be determined by economic theory or by statistical testing. In the statistical test, the null hypothesis (H0: the \( \beta_i \) are all the same) maintains that a traditional log-linear model is sufficient. In this article, we applied the likelihood ratio test to examine the nonlinear relationship. When a threshold effect is found, further testing is required to determine whether there are single, double or triple thresholds.
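To make the estimation procedure concrete, the sketch below implements a single-threshold version of this approach: the series are de-meaned within each country (the fixed-effects transformation) and a grid search over candidate thresholds selects the value that minimises the sum of squared errors, after which the likelihood-ratio statistic would be evaluated. This is a didactic re-implementation under simplifying assumptions, not the authors' Gauss/Stata code, and all variable names are placeholders.

```python
import numpy as np

def demean_within(x, ids):
    """Subtract each country's time mean (the within / fixed-effects transform)."""
    out = np.asarray(x, dtype=float).copy()
    for i in np.unique(ids):
        out[ids == i] -= out[ids == i].mean(axis=0)
    return out

def single_threshold_fit(ln_c, ln_p, ln_gni, controls, ids, candidates):
    """Grid-search one income threshold; return (gamma_hat, sse, coefficients)."""
    y = demean_within(ln_c, ids)
    best = None
    for gamma in candidates:                      # candidate GNI thresholds (levels)
        low = (ln_gni <= np.log(gamma)).astype(float)
        X = np.column_stack([
            ln_p * low, ln_p * (1 - low),         # regime-specific price terms
            ln_gni * low, ln_gni * (1 - low),     # regime-specific income terms
            controls,                             # rural share, MPOWER count, ln(NeiP)
        ])
        X = demean_within(X, ids)
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        sse = float(((y - X @ beta) ** 2).sum())
        if best is None or sse < best[1]:
            best = (gamma, sse, beta)
    return best
```

A double-threshold specification, as selected in the paper, applies the same grid-search idea sequentially and tests each added threshold with a likelihood-ratio statistic.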
To avoid the inconsistency of standard errors caused by heteroskedasticity, serial correlation, and heterogeneity of the residual terms, a consistent correction was performed according to White (1980) [25]. The statistical software packages Gauss and Stata were used to perform the analysis. To determine the effects of cigarette price hikes on cigarette consumption, cigarette consumption in 2015 was set as the baseline for this study. We introduced 10% increments in cigarette prices to simulate changes in future cigarette consumption based on the cigarette price elasticities estimated in this study. Changes in tobacco tax revenues were calculated based on the changes in consumption due to price increases. The number of averted smoking-attributable deaths (SADs) was derived from the simulated impact of price increments on the reduction in smokers and was adjusted for the fact that smoking cessation still carries considerable risks of early death [26]. The applied mortality adjustment factors were calculated for each country surveyed, assuming that 95, 75, 70, 50 and 10% of those who ceased smoking when aged 15 to 29, 30 to 39, 40 to 49, 50 to 59 and at least 60 years, respectively, would remain unaffected by their previous smoking habits [27]. Data on population stratification were extracted from the Eurostat database.
Regression results
Table 2 shows the test results for the threshold effect. For the observed EU countries, the test for one threshold had a value of 43.66, suggesting that price elasticities may indeed fluctuate as a result of changing income levels. The panel threshold model thus seems more appropriate for this study than the traditional panel data model. Next, the value of the LR statistic for two thresholds was 36.61, which allowed us to reject, at the 10% significance level, the null hypothesis that there was no threshold effect. Furthermore, there were two significant income threshold values for cigarette prices in the EU, at US$5418 and US$8385. As the sum of squared errors (SSE) of the double-threshold model (0.311) is lower than that of the single-threshold model (0.355), the double-threshold model seems to be more appropriate.
Table 2 Threshold model estimates
This study thus used three income regimes with different GNI per capita in the analysis. As illustrated in Table 3, Bulgaria and Romania were assigned to the first, Latvia and Poland to the second, and the remaining 24 EU countries to the third income regime.
Table 3 Impact of cigarette price increases by 10% on cigarette consumption, tax revenue, and reduction in smoking-attributable deaths
Elasticity estimates
Our model showed differences in the cigarette price elasticity estimates for each income threshold (Table 2). When GNI per capita was lower than US$5418 (Regime 1), cigarette price elasticity was the highest at −1.227 and income elasticity the lowest at 0.282. When GNI per capita was between US$5418 and US$8385 (Regime 2), cigarette price elasticity reached −0.829 and income elasticity 0.423. When GNI per capita was higher than US$8385 (Regime 3), cigarette price elasticity was the lowest at −0.503 and income elasticity the highest at 0.576. In addition, among the 28 EU countries, cigarette prices in Eastern European countries (\( NeiP_{it}^{*} \)) had a negative impact on cigarette consumption (−0.057), indicating that lower cigarette prices in Eastern European countries led to higher domestic consumption.
The number of MPOWER measures implemented at the highest level of achievement had a negative and statistically significant impact on cigarette consumption. Moreover, living in rural areas had a positive and statistically significant impact on cigarette consumption.
Effect of cigarette prices on cigarette consumption, tobacco tax revenue, and smoking-related deaths
Results of the administered price simulation showed that an increase in cigarette prices of 10% would reduce cigarette consumption the most in Bulgaria and Romania (average price increase per pack: US$0.115; consumption reduced by 12.27%), followed by Latvia and Poland (average price increase per pack: US$0.232; consumption reduced by 8.29%). The largest group of EU countries (referred to in the following as EU24) had a smaller reduction in cigarette consumption (5.03%) (Table 3). Furthermore, the administered price simulation exhibited different effects on tax revenues among the three income groups. Bulgaria and Romania had a decrease in tax revenues (−1.41%), whereas the EU24 countries had the highest average increase in tax revenues (7.03%). Latvia and Poland, on the other hand, showed the lowest average increase in tax revenues (3.15%) among all EU countries. The simulated tax increase showed a significant impact on the number of averted smoking-attributable deaths (SADs) in all EU countries, but the low-income countries were much more affected by the policy measure than the richer EU24 countries. According to the simulation, Bulgaria and Romania would avert over 251,860 deaths, followed by Latvia and Poland with 210,061 averted deaths, whereas the number of averted SADs would reach about 1.4 million in the EU24 zone. According to the results of this study, the price elasticity of cigarette demand in the 28 countries comprising the EU ranged from −0.503 to −1.227. Countries with a GNI per capita lower than US$5418 had the highest cigarette price elasticity (−1.227). Countries with a GNI per capita higher than US$5418 had lower cigarette price elasticities. These findings are similar to those of the International Agency for Research on Cancer [28], which showed that low- and middle-income countries have a higher price elasticity. The results of this study thus further emphasise the importance of using economic measures as an intervention tool to reduce smoking, especially in countries such as Bulgaria and Romania. In addition, this study estimated that the income elasticities of cigarette demand ranged from 0.282 to 0.576. Previously reported estimates ranged between 0.3 and 0.4 [4]. Despite differences in the size of the observed effect, these elasticity figures suggest that income growth may have promoted cigarette consumption in the observed EU countries. That is, the effects of price increases on consumption might be offset by income growth in countries with a GNI per capita above US$8385, where the estimated cigarette price elasticity was smaller in magnitude than the income elasticity. Thus, almost all EU countries would have to increase their cigarette prices substantially to counteract the increase in cigarette consumption that might be caused by income growth. This study found that an average price increase of 10% throughout the EU would lead to an average increase in tax revenues of about 6.76%, which is consistent with previously reported results [29]. Moreover, the results of this study showed that tobacco taxation revenues of the EU countries would increase significantly, by US$6644 million, as a result of rising cigarette prices.
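The orders of magnitude reported above can be reproduced approximately from the regime-specific elasticities and the age-specific mortality adjustment factors given in the Methods. The sketch below is a simplified illustration only: tax revenue is proxied by price times quantity (the paper works with each country's actual tax structure), and the baseline consumption, quitter counts and lifetime risk in the example are placeholders rather than study data.

```python
# Elasticities by income regime (Table 2) and the mortality adjustment factors
# quoted in the Methods; all other numbers below are illustrative placeholders.
ELASTICITY = {"regime1": -1.227, "regime2": -0.829, "regime3": -0.503}
ADJUSTMENT = {"15-29": 0.95, "30-39": 0.75, "40-49": 0.70, "50-59": 0.50, "60+": 0.10}

def simulate_price_rise(consumption_per_capita, regime, price_rise=0.10):
    """Approximate changes in consumption and in a (price x quantity) revenue proxy."""
    d_cons = ELASTICITY[regime] * price_rise               # e.g. -12.27% in regime 1
    new_consumption = consumption_per_capita * (1 + d_cons)
    d_revenue = (1 + price_rise) * (1 + d_cons) - 1         # gross-revenue proxy only
    return new_consumption, d_cons, d_revenue

def averted_sads(quitters_by_age, lifetime_risk=0.5):
    """Quitters weighted by the age-specific factors above, times an assumed
    (placeholder) lifetime risk that a continuing smoker dies of smoking."""
    return sum(n * ADJUSTMENT[age] * lifetime_risk for age, n in quitters_by_age.items())

_, d_cons, d_rev = simulate_price_rise(consumption_per_capita=1500, regime="regime1")
print(f"consumption change {d_cons:+.2%}, gross-revenue proxy {d_rev:+.2%}")
```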
In the future, increased cigarette prices in all EU countries are likely to further reduce the demand for cigarettes, and the appreciable increase in tobacco taxation revenues could be spent on the prevention and control of cigarette-related diseases. Additionally, this study revealed a negative relationship between the number of MPOWER measures at the highest level of achievement and cigarette consumption, which implies that the implementation of MPOWER measures can reduce cigarette consumption in the 28 EU countries. In most countries, however, the number of MPOWER measures at the highest level of achievement was below 2; that is, the number of fully implemented MPOWER tobacco control policies among EU countries was rather small. Therefore, efforts have to be made to implement far more MPOWER tobacco control policies so as to achieve greater reductions in cigarette consumption as a whole. This study showed that price increases in Bulgaria and Romania had the greatest effects on reducing cigarette consumption among all EU countries, while these countries experience a comparatively high prevalence of smoking [30]. In spite of gradual increases in tobacco taxes to control tobacco consumption in recent years, cigarette prices in Bulgaria and Romania have remained comparatively low [2, 31]. If these two countries continued to increase cigarette prices and the number of MPOWER tobacco control policies, current achievements could be significantly enhanced, and funding for prevention and control programmes could be substantially improved through larger taxation revenues. Transnational information was analysed in this study. However, the integrated analysis of transnational information may lead to incorrect inferences because different countries have different cigarette consumption structures. We thus suggest that all countries establish a cigarette consumption database and market-monitoring mechanism to undertake long-range tracking and analysis. Illicit trade in tobacco products was not included in the study, as reliable data could not be obtained for all countries. Moreover, the data on cigarette consumption analysed in this study refer to factory-made (FM) cigarettes. Roll-your-own (RYO) tobacco products have become popular in the EU in recent years and may influence consumption behaviour. Further research on price effects may thus address the issues of illicit trade and RYO cigarette use. This study estimated that price elasticities of cigarette demand ranged from −0.503 to −1.227. These figures are negative, indicating that cigarette demand in the 28 EU countries fell as prices rose during the study period. It is recommended that the 28 EU countries should levy a higher tobacco tax to increase cigarette prices, thus reducing cigarette consumption. The subsequent increase in tobacco tax revenues would be instrumental in covering expenditures related to tobacco prevention and control programs.
An econometric analysis of 11 countries for the PPACTE Project. National Institute for Health and Welfare: Helsinki; 2012. Ahmad S, Franz GA. Raising taxes to reduce smoking prevalence in the US:A simulation of the anticipated health and economic impacts. Public Health. 2008;122(1):3–10. Goodchild M, Perucic AM, Nargis N. Modelling the impact of increasing tobacco taxes on public health and finance. Bull World Health Organ. 2016;94:250–257. World Health Organisation. WHO technical manual on tobacco tax administration, 2010. http://www.who.int/tobacco/publications/tax_administration/en/. Accessed 2 Feb 2017. Chapman S, Richardson J. Tobacco excise and declining tobacco consumption: the case of Papua New Guinea. Am J Public Health. 1990;80(5):537–40. Hu TW, Mao Z. Effects of cigarette tax on cigarette consumption and the Chinese economy. Tob Control. 2002;11(2):105–8. Gallet CA, List JA. Cigarette demand: a meta-analysis of elasticities. Health Econ. 2003;12(10):821–35. Shohaimi S, Luben R, Wareham N, Day N, Bingham S, Welch A, Oakes S, Khaw KT. Residential area deprivation predicts smoking habit independently of individual educational level and occupational social class. A cross sectional study in the Norfolk cohort of the European Investigation into Cancer (EPIC-Norfolk). J Epidemiol Community Health. 2003;57(4):270–6. Ross CE. Walking, exercising, and smoking: does neighborhood matter? Soc Sci Med. 2000;51(2):265–74. Idris BI, Giskes K, Borrell C, Benach J, Costa G, Federico B, Helakorpi S, Helmert U, Lahelma E, Moussa KM, Ostergren PO, Prättälä R, Rasmussen NK, Mackenbach JP, Kunst AE. Higher smoking prevalence in urban compared to non-urban areas: time trends in six European countries. Health Place. 2007;13(3):702–12. Townsend JL, Roderick P, Cooper J. Cigarette smoking by socioeconomic group, sex, and age: effects of price, income, and health publicity. BMJ. 1994;309(6959):923–7. Centers for Disease Control and Prevention (CDC). Response to increases in cigarette prices by race/ethnicity, income, and age groups--United States, 1976-1993. MMWR Morb Mortal Wkly Rep. 1998;47(29):605–9. Beyond Smoking Kills. Protecting Children, Reducing Inequalities. London: ASH, 2008. World Health Organization. WHO Report on the Global Tobacco Epidemic, 2009–2015. Geneva; WHO, 2009–2016. Johnston N, Kegö W, Wenngren C. Cigarette smuggling: Poland to Sweden. Institute for Security and Development Policy: Stockholm; 2016. Chen SH, Lee JM, Liu HH, Wang HC, Ye CY. The cross-effects of cigarette and betel nut consumption in Taiwan: have tax increases made a difference? Health Policy Plan. 2011;26(3):266–73. Huang BN, Yang CW. Demand for cigarettes revisited: an application of the threshold regression model. Agric Econ. 2006;34(1):81–6. Euromonitor International (database online). Tobacco: global. Passport database. Euromonitor. London: Euromonitor; 2014. http://www.euromonitor.com/tobacco World Bank. World Development Indicators (WDI). Washington, DC: World Bank; 2014. United Nations. World urbanization prospects: the 2014 revision. New York: United Nations Population Division; 2014. Hansen BE. Threshold effects in non-dynamic panels: Estimation, testing, and inference. J Econom. 1999;93(2):345–68. White H. A heteroscedasticity-consistent covariance matrix estimator and a direct test for heteroscedasticity. Econom. 1980;48(4):817–38. Ranson K, Jha P, Chaloupka FJ, Nguyen SN. The effectiveness and cost-effectiveness of price and other tobacco control policies. In: Jha P, Chaloupka FJ, editors. 
Tobacco control in developing countries. Oxford: Oxford University Press; 2000. p. 427–47. Goodchild M, Perucic AM, Nargis N. Modelling the impact of raising tobacco taxes on public health and finance. Bull World Health Organ. 2016;94(4):250–7. International Agency for Research on Cancer. Effectiveness of tax and price policies for tobacco control. IARC handbooks of cancer prevention: tobacco control. Volume 14. Lyon: International Agency for Research on Cancer; 2011. Waters H, Sáenz de Miera B, Ross H, Reynales Shigematsu LM. The economics of tobacco and tobacco taxation in Mexico. Paris: International Union Against Tuberculosis and Lung Disease; 2010. European Commission. Survey on tobacco - analytical report. Flash Eurobarometer No 253, 2009. http://ec.europa.eu/public_opinion/flash/fl_253_en.pdf. Accessed 2 Feb 2017. Jassem J, Przewoźniak K, Zatoński W. Tobacco control in Poland—successes and challenges. Transl Lung Cancer Res. 2014;3(5):280–5.
The authors would like to thank the Ministry of Science and Technology, Taiwan, for its support in the conduct of this study. We also thank all those who were involved in this study for their contribution and commitment throughout the study. This study was funded by a grant through the Ministry of Science and Technology (Grant number: NSC 103–2410-H-022-011-). Data supporting the results reported in the article can be obtained from the corresponding author.
Department of International Trade, Overseas Chinese University, Taichung, Taiwan Chun-Yuan Yeh & Christian Schafferer Department of Shipping and Transportation Management, National Kaohsiung Marine University, 142, Hai-Chuan Rd. Nan-Tzu, Kaohsiung, Taiwan Jie-Min Lee Department of Marine Leisure Management, National Kaohsiung Marine University, Kaohsiung, Taiwan Li-Ming Ho Department of Finance, National Changhua University of Education, Changhua, Taiwan Chi-Jung Hsieh
CYY and JML performed the calculations and analyses reported in the text. JML and LMH reviewed the literature for relevant data and documentation. JML and CYY drafted the manuscript, which was edited and critically revised by CYY, CS and CJH. All authors read and approved the final manuscript. Correspondence to Jie-Min Lee. This article does not contain any studies with human participants performed by any of the authors. We received permission to use the data for this study from Euromonitor International. The author(s) declare that they have no competing interests.
Yeh, CY., Schafferer, C., Lee, JM. et al. The effects of a rise in cigarette price on cigarette consumption, tobacco taxation revenues, and of smoking-related deaths in 28 EU countries-- applying threshold regression modelling. BMC Public Health 17, 676 (2017). https://doi.org/10.1186/s12889-017-4685-x
Keywords: Cigarette price; Cigarette consumption; Threshold regression model; Smoking-attributable mortality
Nutrition Research and Practice
The Korean Nutrition Society (한국영양학회), eISSN 2005-6168
Nutrition Research and Practice (NRP) is an official journal, jointly published by the Korean Nutrition Society and the Korean Society of Community Nutrition since 2007. The journal was initially published quarterly and has been published bimonthly since 2010. NRP aims to stimulate research and practice across diverse areas of human nutrition. The Journal publishes peer-reviewed original manuscripts on nutrition biochemistry and metabolism, community nutrition, nutrition and disease management, nutritional epidemiology, nutrition education, and institutional food service in the following categories: Original Research Articles, Notes, Communications, and Reviews. Reviews are accepted by invitation of the editors only. Statements made and opinions expressed in the manuscripts published in this Journal represent the views of the authors and do not necessarily reflect the opinion of the Societies. This journal is indexed/tracked/covered by PubMed, PubMed Central, Science Citation Index Expanded (SCIE), SCOPUS, Chemical Abstracts Service (CAS), CAB International (CABI), KoreaMed, Synapse, KoMCI, CrossRef and Google Scholar. http://www.nrpesubmit.org/
Chestnut extract induces apoptosis in AGS human gastric cancer cells
Lee, Hyun-Sook;Kim, Eun-Ji;Kim, Sun-Hyo 185 https://doi.org/10.4162/nrp.2011.5.3.185
In Korea, chestnut production is increasing each year, but consumption is far below production. We investigated the antioxidant activity and anticancer effects of chestnut extracts. Ethanol extracts of raw chestnut (RCE) or chestnut powder (CPE) had dose-dependent superoxide scavenging activity. Viable numbers of MDA-MB-231 human breast cancer cells, DU145 human prostate cancer cells, and AGS human gastric cancer cells decreased by 18, 31, and 69%, respectively, following treatment with 200 μg/mL CPE for 24 hr. CPE at various concentrations (0–200 μg/mL) markedly decreased AGS cell viability and increased apoptotic cell death in a dose- and time-dependent manner. CPE increased the levels of cleaved caspase-8, -7, and -3 and of cleaved poly (ADP-ribose) polymerase in a dose-dependent manner, but not that of cleaved caspase-9. CPE exerted no effects on Bcl-2 and Bax levels. The level of X-linked inhibitor of apoptosis protein decreased within a narrow range following CPE treatment. The levels of TRAIL, DR4, and Fas-L increased dose-dependently in CPE-treated AGS cells. These results show that CPE decreases growth and induces apoptosis in AGS gastric cancer cells and that activation of the death receptor pathway contributes to CPE-induced apoptosis in AGS cells. In conclusion, CPE had more of an effect on gastric cancer cells than on breast or prostate cancer cells, suggesting that chestnuts would have a positive effect against gastric cancer.
Effects of Panicum miliaceum L. extract on adipogenic transcription factors and fatty acid accumulation in 3T3-L1 adipocytes
Park, Mi-Young;Seo, Dong-Won;Lee, Jin-Young;Sung, Mi-Kyung;Lee, Young-Min;Jang, Hwan-Hee;Choi, Hae-Yeon;Kim, Jae-Hyn;Park, Dong-Sik 192
The dietary intake of whole grains is known to reduce the incidence of chronic diseases such as obesity, diabetes, cardiovascular disease, and cancer. To investigate whether there are anti-adipogenic activities in various Korean cereals, we assessed water extracts of nine cereals.
The results showed that treatment of 3T3-L1 adipocytes with Sorghum bicolor L. Moench, Setaria italica Beauvois, or Panicum miliaceum L. extract significantly inhibited adipocyte differentiation, as determined by measuring oil red-O staining, triglyceride accumulation, and glycerol 3-phosphate dehydrogenase activity. Among the nine cereals, P. miliaceum L. showed the highest anti-adipogenic activity. The effects of P. miliaceum L. on the mRNA expression of peroxisome proliferator-activated receptor-γ, sterol regulatory element-binding protein 1, and CCAAT/enhancer-binding protein-α were evaluated, revealing that the extract significantly decreased the expression of these genes in a dose-dependent manner. Moreover, P. miliaceum L. extract changed the ratio of monounsaturated fatty acids to saturated fatty acids in adipocytes, which is related to biological activity and cell characteristics. These results suggest that some cereals efficiently suppress adipogenesis in 3T3-L1 adipocytes. In particular, the effect of P. miliaceum L. on adipocyte differentiation is associated with the downregulation of adipogenic genes and fatty acid accumulation in adipocytes.
Herbal extract THI improves metabolic abnormality in mice fed a high-fat diet
Han, So-Ra;Oh, Ki-Sook;Yoon, Yoo-Sik;Park, Jeong-Su;Park, Yun-Sun;Han, Jeong-Hye;Jeong, Ae-Lee;Lee, Sun-Yi;Park, Mi-Young;Choi, Yeon-A;Lim, Jong-Seok;Yang, Young 198
Target herbal ingredient (THI) is an extract made from two herbs, Scutellariae Radix and Platycodi Radix. It has been developed as a treatment for metabolic diseases such as hyperlipidemia, atherosclerosis, and hypertension. One component of these two herbs has been reported to have anti-inflammatory, anti-hyperlipidemic, and anti-obesity activities. However, there have been no reports about the effects of the mixed extract of these two herbs on metabolic diseases. In this study, we investigated the metabolic effects of THI using a diet-induced obesity (DIO) mouse model. High-fat diet (HFD) mice were orally administered 250 mg/kg of THI daily. After 10 weeks of treatment, the THI-administered HFD mice showed reduced body weights and epididymal white adipose tissue weights as well as improved glucose tolerance. In addition, the level of total cholesterol in the serum was markedly reduced. To elucidate the molecular mechanism of the metabolic effects of THI in vitro, 3T3-L1 cells were treated with THI, after which the mRNA levels of adipogenic transcription factors, including C/EBPα and PPARγ, were measured. The results show that the expression of these two transcription factors was downregulated by THI in a dose-dependent manner. We also examined the combined effects of THI and swimming exercise on metabolic status. THI administration accompanied by swimming exercise had a synergistic effect on serum cholesterol levels. These findings suggest that THI could be developed as a supplement for improving metabolic status.
Exercise training and selenium or a combined treatment ameliorates aberrant expression of glucose and lactate metabolic proteins in skeletal muscle in a rodent model of diabetes
Kim, Seung-Suk; Koo, Jung-Hoon; Kwon, In-Su; Oh, Yoo-Sung; Lee, Sun-Jang; Kim, Eung-Joon; Kim, Won-Kyu; Lee, Jin; Cho, Joon-Yong (p. 205)
Exercise training (ET) and selenium (SEL) were evaluated either individually or in combination (COMBI) for their effects on the expression of glucose (AMPK, PGC-1α, GLUT-4) and lactate metabolic proteins (LDH, MCT-1, MCT-4, COX-IV) in heart and skeletal muscles in a rodent model (Goto-Kakizaki, GK) of diabetes. Forty GK rats either remained sedentary (SED), performed ET, received SEL (5 μmol per kg body weight per day), or underwent both ET and SEL treatment for 6 wk. ET alone, SEL alone, or COMBI resulted in a significant lowering of lactate, glucose, and insulin levels as well as a reduction in HOMA-IR and AUC for glucose relative to SED. Additionally, ET alone, SEL alone, or COMBI increased glycogen content and citrate synthase (CS) activities in liver and muscles. However, their effects on glycogen content and CS activity were tissue-specific. In particular, ET alone, SEL alone, or COMBI induced upregulation of glucose (AMPK, PGC-1α, GLUT-4) and lactate (LDH, MCT-1, MCT-4, COX-IV) metabolic proteins relative to SED. However, their effects on glucose and lactate metabolic proteins also appeared to be tissue-specific. Glucose and lactate metabolic protein expression was not further enhanced with COMBI compared to ET alone or SEL alone. These data suggest that ET alone, SEL alone, or COMBI represent a practical strategy for ameliorating aberrant expression of glucose and lactate metabolic proteins in diabetic GK rats.

High glucose diets shorten lifespan of Caenorhabditis elegans via ectopic apoptosis induction
Choi, Shin-Sik (p. 214)
Carbohydrate-based diets rapidly increase the blood glucose level due to the fast conversion of carbohydrates to glucose. High glucose diets have been known to induce many lifestyle diseases. Here, we demonstrated that a high glucose diet shortened the lifespan of Caenorhabditis elegans through apoptosis induction. Control adult groups without a glucose diet lived for 30 days, whereas animals fed 10 mg/L of D-glucose lived only for 20 days. The reduction of lifespan by the glucose diet showed a dose-dependent profile in the concentration range of glucose from 1 to 20 mg/L. The aging effect of the high glucose diet was examined by measuring the locomotion response time after stimulating movement of the animals by touch. The glucose diet decreased the locomotion capacity of the animals during mid-adulthood. High glucose diets also induced ectopic apoptosis in the body of C. elegans, which is a potent mechanism that can explain the shortened lifespan and aging. Apoptotic cell corpses stained with SYTO 12 were found in the worms fed 10 mg/L of glucose. Mutation of the core apoptotic regulatory genes CED-3 and CED-4 inhibited the reduction of viability induced by the high glucose diet, which indicates that these regulators were required for glucose-induced apoptosis or lifespan shortening. Thus, we conclude that high glucose diets have potential for inducing ectopic apoptosis in the body, resulting in a shortened lifespan accompanied by loss of locomotion capacity.
High fat diet-induced obesity leads to proinflammatory response associated with higher expression of NOD2 protein
Kim, Min-Soo; Choi, Myung-Sook; Han, Sung-Nim (p. 219)
Obesity has been reported to be associated with a low-grade inflammatory status. In this study, we investigated the inflammatory response as well as associated signaling molecules in immune cells from diet-induced obese mice. Four-week-old C57BL mice were fed diets containing 5% fat (control) or 20% fat and 1% cholesterol (HFD) for 24 weeks. Splenocytes (1 × 10^7 cells) were stimulated with 10 μg/mL of lipopolysaccharide (LPS) for 6 or 24 hrs. Production of interleukin (IL)-1β, IL-6, and TNF-α as well as the protein expression levels of nucleotide-binding oligomerization domain (NOD)2, signal transducer and activator of transcription (STAT)3, and pSTAT3 were determined. Mice fed the HFD gained significantly more body weight than mice fed the control diet (28.2 ± 0.6 g in HFD and 15.4 ± 0.8 g in control). After stimulation with LPS for 6 hrs, production of IL-1β was significantly higher (P = 0.001) and production of tumor necrosis factor (TNF)-α tended to be higher (P = 0.064) in the HFD group. After 24 hrs of LPS stimulation, splenocytes from the HFD group produced significantly higher levels of IL-6 (10.02 ± 0.66 ng/mL in HFD and 7.33 ± 0.56 ng/mL in control, P = 0.005) and IL-1β (121.34 ± 12.72 pg/mL in HFD and 49.74 ± 6.58 pg/mL in control, P < 0.001). There were no significant differences in the expression levels of STAT3 and pSTAT3 between the HFD and control groups. However, the expression level of NOD2 protein as determined by Western blot analysis was 60% higher in the HFD group than in the control group. NOD2 contributes to the induction of inflammation by activation of nuclear factor κB. These findings suggest that diet-induced obesity is associated with an increased inflammatory response of immune cells, and higher expression of NOD2 may contribute to these changes.

Effect of processed foods on serum levels of eosinophil cationic protein among children with atopic dermatitis
Lee, Ji-Min; Jin, Hyun-Jung; Noh, Geoun-Woong; Lee, Sang-Sun (p. 224)
The prevalence of atopic dermatitis (AD) in school-age children has increased in industrialized countries. As diet is one of the main factors provoking AD, some studies have suggested that food additives in processed foods could function as pseudoallergens, which elicit non-immunoglobulin E-mediated reactions. Eosinophil cationic protein (ECP) is an eosinophil granule protein released during allergic reactions to food allergens in patients with AD. Thus, serum ECP levels may be a useful indicator of ongoing inflammatory processes in patients with AD. The purpose of this study was to investigate the effect of consuming monosodium glutamate (MSG) in processed foods on serum ECP levels among children with AD. This study was performed with 13 patients with AD (age, 7-11 years) who had a normal range of total IgE levels (< 300 IU/ml). All participants ate normal diets during the first week. Then, six patients were allocated to a processed food-restricted group (PRDG) and seven patients were in a general diet group (GDG). During the second week, children in the PRDG and their parents were asked to avoid eating all processed foods. In the third week, children in the PRDG were allowed all foods, as were the children in the GDG throughout the 3-week period.
The subjects were asked to complete a dietary record during the trial period. Children with AD who received the dietary restriction showed decreased consumption of MSG, decreased serum ECP levels, and an improved SCORing Atopic Dermatitis (SCORAD) index (P < 0.05). No differences in serum ECP levels or MSG consumption were observed in the GDG. Serum total IgE levels were not changed in either group. In conclusion, a reduction in MSG intake by restricting processed food consumption may lead to a decrease in serum ECP levels in children with AD and improve AD symptoms.

Comparison of nutrient intake by sleep status in selected adults in Mysore, India
Zadeh, Sara Sarrafi; Begum, Khyrunnisa (p. 230)
Insomnia has become a major public health issue in recent times. Although quality of sleep is affected by environmental, psychophysiological, and pharmacological factors, diet and nutrient intake also contribute to sleep problems. This study investigated the association between nutrient intake and co-morbid symptoms associated with sleep status among selected adults. Subjects in this study included 87 men and women aged 21-45 years. Presence of insomnia was assessed using the Insomnia Screening Questionnaire, and dietary intake was measured over three consecutive days by dietary survey. Descriptive analysis, ANOVA, and Chi-Square tests were performed to compute and interpret the data. Approximately 60% of the participants were insomniacs. People with insomnia consumed significantly smaller quantities of nutrients than normal sleepers. Differences in intakes of energy, carbohydrates, folic acid, and vitamin B12 were highly significant (P < 0.002). Further, intakes of protein, fat, and thiamine were significantly different (P < 0.021) between insomniacs and normal sleepers. The nutrient intake pattern of the insomniacs with co-morbid symptoms was quite different from that of the normal sleepers. Based on these results, it is probable that there is an association between nutrition deficiency, co-morbid symptoms, and sleep status. More studies are required to confirm these results.

Correlation between attention deficit hyperactivity disorder and sugar consumption, quality of diet, and dietary behavior in school children
Kim, Yu-Jeong; Chang, Hye-Ja (p. 236)
This study investigated the correlation between sugar intake by fifth grade students in primary schools and development of Attention Deficit Hyperactivity Disorder (ADHD). A total of 107 students participated, and eight boys and one girl (8.4% of the total) were categorized as high risk for ADHD according to diagnostic criteria. There were significant differences in the occupations and drinking habits of the respondents' fathers between the normal group and the risk group. In a comparison of students' nutrition intake status with daily nutrition intake standards for Koreans, students consumed twice as much protein as the recommended level, whereas their calcium intake was only 60% of the recommended DRI (dietary reference intake). Regarding the intake volume of vitamin C, the normal group posted 143.9% of the recommended DRI, whereas the risk group showed only 65.5% of the recommended DRI. In terms of simple sugar intake from snacks, students in the normal group consumed 58.4 g while the risk group consumed 50.2 g. These levels constituted 12.5% of their total daily volume of sugar intake from snacks, which is higher than the 10% standard recommended by the WHO.
In conclusion, children who consumed less sugar from fruit snacks or whose vitamin C intake was less than the RI were at increased risk for ADHD (P < 0.05). However, no significant association was observed between the total volume of simple sugar intake from snacks and ADHD development.

External cross-validation of bioelectrical impedance analysis for the assessment of body composition in Korean adults
Kim, Hyeoi-Jin; Kim, Chul-Hyun; Kim, Dong-Won; Park, Mi-Ra; Park, Hye-Soon; Min, Sun-Seek; Han, Seung-Ho; Yee, Jae-Yong; Chung, So-Chung; Kim, Chan (p. 246)
Bioelectrical impedance analysis (BIA) models must be validated against a reference method in a representative population sample before they can be accepted as accurate and applicable. The purpose of this study was to compare the eight-electrode BIA method with DEXA as a reference method in the assessment of body composition in Korean adults and to investigate the predictive accuracy and applicability of the eight-electrode BIA model. A total of 174 apparently healthy adults participated. The study was designed as a cross-sectional study. Fat mass (FM), %fat, and fat-free mass (FFM) were estimated by an eight-electrode BIA model and were measured by DEXA. Correlations between BIA_%fat and DEXA_%fat were 0.956 for men and 0.960 for women, with a total error of 2.1 %fat in men and 2.3 %fat in women. The mean difference between BIA_%fat and DEXA_%fat was small but significant (P < 0.05), which resulted in an overestimation of 1.2 ± 2.2 %fat (95% CI: -3.2 to 6.2 %fat) in men and an underestimation of -2.0 ± 2.4 %fat (95% CI: -2.3 to 7.1 %fat) in women. In the Bland-Altman analysis, the %fat of 86.3% of men and of 66.0% of women was accurately estimated to within 3.5 %fat. The BIA had good agreement for prediction of %fat in Korean adults. However, the eight-electrode BIA had small, but systematic, errors of %fat in the predictive accuracy for individual estimation. The total errors led to an overestimation of %fat in lean men and an underestimation of %fat in obese women.

Survey of American food trends and the growing obesity epidemic
Shao, Qin; Chin, Khew-Voon (p. 253)
The rapid rise in the incidence of obesity has emerged as one of the most pressing global public health issues in recent years. The underlying etiological causes of obesity, whether behavioral, environmental, genetic, or a combination of several of them, have not been completely elucidated. The obesity epidemic has been attributed to the ready availability, abundance, and overconsumption of high-energy content food. We determined here by Pearson's correlation the relationship between food type consumption and rising obesity using the loss-adjusted food availability data from the United States Department of Agriculture (USDA) Economic Research Services (ERS) as well as the obesity prevalence data from the Behavioral Risk Factor Surveillance System (BRFSS) and the National Health and Nutrition Examination Survey (NHANES) at the Centers for Disease Control and Prevention (CDC). Our analysis showed that total calorie intake and consumption of high fructose corn syrup (HFCS) did not correlate with rising obesity trends. Intake of other major food types, including chicken, dairy fats, salad and cooking oils, and cheese, also did not correlate with obesity trends. However, our results surprisingly revealed that consumption of corn products correlated with rising obesity and was independent of gender and race/ethnicity among population dynamics in the U.S.
Therefore, we were able to demonstrate a novel link between the consumption of corn products and rising obesity trends that has not been previously attributed to the obesity epidemic. This correlation coincides with the introduction of bioengineered corn into the human food chain, thus raising a new hypothesis that should be tested in molecular and animal models of obesity.

Development and evaluation of a food frequency questionnaire for Vietnamese female immigrants in Korea: the Korean Genome and Epidemiology Study (KoGES)
Kim, Sun-Hye; Choi, Ha-Ney; Hwang, Ji-Yun; Chang, Nam-Soo; Kim, Wha-Young; Chung, Hye-Won; Yang, Yoon-Jung (p. 260)
The objectives of this study were to develop a food-frequency questionnaire (FFQ) for Vietnamese female immigrants in Korea and to evaluate the validity of the FFQ. A total of 80 food items were selected in developing the FFQ according to consumption frequency, the contribution of energy and other nutrients, and the cooking methods, based on one-day 24-hour recalls (24HR) from 918 Vietnamese female immigrants between November 2006 and November 2007. The FFQ was validated by comparison with 24HR of 425 Vietnamese female immigrants between November 2008 and August 2009. The absolute nutrient intake calculated from the FFQ was higher than that estimated by 24HR for most nutrients. The correlation coefficients between 24HR and FFQ ranged from 0.10 (vitamin C) to 0.36 (energy) for crude intake, from 0.05 (vitamin E) to 0.32 (calcium) per 1000 kcal, and from 0.08 (zinc) to 0.34 (calcium) for energy-adjusted intake. More than 70% of subjects were classified into the same or adjacent agreement groups for nutrients other than fiber, sodium, vitamin A, vitamin C, and vitamin E, while less than 10% of subjects were classified into complete disagreement groups. We conclude that the FFQ appears to be an acceptable tool for estimating nutrient intake and dietary patterns of Vietnamese female immigrants in Korea. Future studies to validate the FFQ using various biomarkers or other dietary assessment methods are needed.

Nutritional intake of Korean population before and after adjusting for within-individual variations: 2001 Korean National Health and Nutrition Survey Data
Kim, Dong-Woo; Shim, Jae-Eun; Paik, Hee-Young; Song, Won-O; Joung, Hyo-Jee (p. 266)
Accurate assessment of the nutrient adequacy of a population should be based on the usual intake distribution of that population. This study was conducted to adjust usual nutrient intake distributions from a single 24-hour recall in the 2001 Korean National Health and Nutrition Survey (KNHNS) in order to determine the magnitude of limitations inherent to a single 24-hour recall in assessing nutrient intakes of a population. Of 9,960 individuals who provided one 24-hour recall in the 2001 KNHNS, 3,976 subjects provided an additional one-day 24-hour recall in the 2002 Korean National Nutrition Survey by Season (KNNSS). To adjust for the usual intake distribution, we estimated within-individual variations derived from the 2001 KNHNS and 2002 KNNSS using the Iowa State University method. The nutritionally at-risk population was assessed in reference to the Dietary Reference Intakes for Koreans (KDRIs). The Korean Estimated Average Requirement (Korean EAR) cut-point was applied to estimate the prevalence of inadequate nutrient intakes, except for iron intakes, which were assessed using the probability approach.
The estimated proportions below the Korean EAR for calcium, riboflavin, and iron were 73%, 41%, and 24% from the usual intake distribution and 70%, 51%, and 39% from the one-day intake distribution, respectively. The estimated proportion of sodium intakes over the Intake Goal of 2,000 mg/day was 100% of the population after adjustment. The energy proportion from protein was within the Korean Acceptable Macronutrient Distribution Ranges (Korean AMDR), whereas that of carbohydrate was higher than the upper limit and that of fat was below the lower limit in subjects aged 30 years or older. According to these results, the prevalence of nutritional inadequacy and excess intake is over-estimated in Korea unless one-day intake distributions are adjusted to usual intake distributions for most nutrients.
https://doi.org/10.1364/OE.416986
Compact and highly-efficient broadband surface grating antenna on a silicon platform
Shahrzad Khajavi,1,* Daniele Melati,2 Pavel Cheben,3 Jens H. Schmid,3 Qiankun Liu,1 Dan Xia Xu,3 and Winnie N. Ye1
1Department of Electronics, Carleton University, 1125 Colonel By Drive, Ottawa, ON K1S 5B6, Canada
2Centre for Nanoscience and Nanotechnologies, CNRS, Université Paris-Saclay, 10 Bv. Thomas Gobert, 91120 Palaiseau, France
3Advanced Electronics and Photonics Research Center, National Research Council Canada, 1200 Montreal Road, Ottawa, ON K1A 0R6, Canada
*Corresponding author: [email protected]
Shahrzad Khajavi, Daniele Melati, Pavel Cheben, Jens H. Schmid, Qiankun Liu, Dan Xia Xu, and Winnie N. Ye, "Compact and highly-efficient broadband surface grating antenna on a silicon platform," Opt. Express 29, 7003-7014 (2021)
Revised Manuscript: February 11, 2021; Manuscript Accepted: February 11, 2021

We present a compact silicon-based surface grating antenna design with a high diffraction efficiency of 89% (-0.5 dB) and directionality of 0.94. The antenna is designed with subwavelength-based L-shaped radiating elements in a 300-nm silicon core, maintaining high efficiency with a compact footprint of 7.6 µm × 4.5 µm. The reflectivity remains below -10 dB over the S, C and L optical communication bands. A broad 1-dB bandwidth of 230 nm in diffraction efficiency is achieved with a central wavelength of 1550 nm.

1. Introduction

Optical antennas are fundamental elements to interface light between integrated photonic circuits, optical fibers or free-space ports [1–3]. High diffraction efficiency, low back-reflection, broadband operation, robustness to fabrication errors and compact footprint are important parameters for optical off-chip coupling [1]. Metal-based optical antennas using plasmonic resonances are very compact and suitable for applications in densely integrated optical phased arrays (OPAs) [4–6], but they commonly exhibit high losses, which result in low radiation efficiency [7]. Antennas based on dielectric surface grating couplers are often used for fiber-chip coupling, where high directionality and efficiency have been reported in the literature [8–11]. High directionalities exceeding 0.95 have been achieved by breaking the device vertical symmetry [12,13]. However, in conventional 220-nm silicon-on-insulator (SOI) waveguides, grating couplers usually need to be 10-15 micrometers long to reach high fiber-chip coupling efficiency. This is suitable for certain applications, e.g.
fiber-chip coupling, where the diffracted field needs to match the fiber mode size, but for applications requiring dense grating arrays, e.g., OPAs [3], smaller antenna elements are desirable. Longer gratings used for fiber-chip coupling normally also come with a limited spectral bandwidth (typically around 40 nm at 1 dB). It has been shown that the optical bandwidth of fiber-chip grating couplers can be increased without sacrificing coupling efficiency by using shorter, strongly diffracting gratings in combination with smaller mode-field diameter optical fibers [14]. An interesting solution to reduce the antenna dimension while maintaining a high efficiency is to use a 300 nm silicon waveguide core [13–15], which enhances the grating strength. At the same time, lowering the back-reflection is an important goal in antenna design [11]. Back-reflections are typically reduced by off-vertical designs limiting the second-order diffraction effect [16]. Subwavelength grating (SWG) metamaterial engineering has also been successfully shown to reduce the index mismatch at structural transitions, hence further reducing back-reflections [13,17–21].

In this paper, we demonstrate the design of a silicon grating antenna for off-chip emission with unprecedented performance, combining high diffraction efficiency, compact footprint, large feature size, low reflectivity and a broad wavelength range. This is achieved by exploiting highly directional L-shaped diffractive elements [1] in a 300-nm-thick silicon-on-insulator (SOI) platform. The antenna has a footprint of 7.6 µm × 4.5 µm and achieves a diffraction efficiency of 0.89, a directionality of 0.94, and a wide 1-dB bandwidth of 230 nm. This performance is achieved using subwavelength grating (SWG) metamaterial engineering, with a minimum feature size of 120 nm, which is compatible with deep-UV lithography [22,23]. The design optimization is performed through a genetic algorithm with 2D Finite-Difference Time-Domain (FDTD) simulations and an effective index model for the SWG metamaterial regions. The results are then validated by 3D FDTD simulations. The design methodology is described in section 2, the simulation results are presented in section 3, the tolerance analysis is discussed in section 4, and the conclusions are summarized in section 5.

2. Design concept and methodology

The antenna comprises a surface grating in an SOI platform with a 300-nm-thick waveguide core, 1-μm buried oxide (BOX) and 2-μm silica cladding, as schematically shown in Fig. 1. The grating implements an L-shaped structure with a shallow etch of 150 nm. Compared to the standard 220 nm SOI platform, the choice of a 300 nm core thickness enhances the scattering strength in the design of the grating unit cell [24,25]. As a consequence, the total length of the grating (i.e., the number of periods) can be reduced while maintaining a high overall scattering efficiency. In addition, the L-shaped scatterers provide a blazing effect and improve the grating directionality, resulting in a high diffraction efficiency in the upward direction. However, the typical back-reflections from L-shaped grating structures are close to -10 dB (2D simulation) near the 1550 nm central wavelength [26]. Several techniques can be utilized to reduce back-reflections, such as interleaved trenches [11] or low- and high-index overlays [8,27,28].
A judiciously placed silicon segment has been reported to mitigate the back-reflection [1]; however, this would require comparatively small feature sizes. Here, we adopt a different strategy based on subwavelength grating (SWG) metamaterial segments [19], as it yields larger feature sizes than other techniques, easing fabrication. SWG metamaterials, since their early demonstrations in silicon waveguides [22,29–33], have become a fundamental tool to control the electromagnetic field distribution in integrated photonic devices [34]. An SWG was first proposed to increase the efficiency of surface grating couplers in [35], and since then many types of SWG-engineered surface couplers have been developed [13,17,18,35–47].

Fig. 1. (a) Three-dimensional (3D) schematic of the antenna. (b) Two-dimensional (2D) longitudinal cross-section of the structure with grating periods Λ1 and Λ2 and structural parameters [L1, L2, L3, L4, L5, L6, nSWG1, nSWG2]. (c) Top view of the SWG structures utilized in the apodized (yellow) and periodic (green) cells, with SWG periods ΛSWG1 and ΛSWG2 and duty cycles DCSWG1 = w2/ΛSWG1, DCSWG2 = w4/ΛSWG2. The light propagates from the input waveguide along the x-axis.

Here we utilize metamaterials as anti-reflection SWGs implemented between the periodic L-shaped segments. The full antenna structure is constructed with two types of unit cells. The first grating period is apodized in order to allow for design solutions with a smoother transition between the input waveguide and the grating, for an overall increased diffraction efficiency. This first period is followed by a periodic section with constant grating strength. The dimensions of the first cell are L1, L2 and L3, where L1 refers to the length of the first SWG section, L2 to the length of the shallow-etched section, and L3 to the length of the un-etched silicon slab (Fig. 1(b)). The SWG refractive index in the first unit cell is controlled by w1 and w2, the widths of the fully etched gap and of the un-etched silicon region, respectively. These parameters define the pitch (ΛSWG1 = w1 + w2) and duty cycle (DCSWG1 = w2/ΛSWG1) of the first SWG section (Fig. 1(c)). For the periodic main section of the grating, the parameters are L4, L5 and L6, defining the length of the SWG section, the shallow-etched section and the un-etched silicon region, respectively. The parameters of the SWGs in the main section are w3 (fully etched gap width) and w4 (width of the un-etched silicon), with the corresponding pitch (ΛSWG2 = w3 + w4) and duty cycle (DCSWG2 = w4/ΛSWG2). We optimized the geometrical parameters of the device targeting the maximum upward diffraction efficiency for the fundamental TE mode at the central wavelength of 1550 nm, using a genetic optimization algorithm [48,49] and 2D FDTD simulations [19]. For this purpose, the SWG sections are treated as homogeneous materials with refractive indices n1 and n2, for the first unit cell and the periodic section respectively, by means of an effective index model. The optimization design space is therefore defined as X = [L1, L2, L3, L4, L5, L6, n1, n2], comprising the six length parameters and the two SWG refractive indices. In order to ensure that only physically meaningful devices were evaluated, the values of Li were constrained to be larger than 40 nm, the period of each grating cell smaller than 1500 nm, and the SWG refractive indices between 2 and 3.
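To make this encoding concrete, the following is a minimal, self-contained sketch (our illustration, not the authors' code) of how the design vector X = [L1, ..., L6, n1, n2], its bounds and the period constraints could be passed to a generic genetic loop. The FDTD fitness call is replaced by a toy surrogate so that the snippet runs on its own; in the real flow it would return the upward diffraction efficiency computed by a 2D FDTD solver. The individual length upper bound of 1420 nm and the specific selection, crossover and mutation operators are our assumptions, while the population size and stopping rule actually used in the design are given in the next paragraph.

```python
# Minimal, self-contained sketch (not the authors' implementation) of encoding
# the design vector X = [L1..L6, n1, n2] for a generic genetic optimizer.
# The FDTD fitness call is replaced by a toy surrogate so the snippet runs;
# the selection/crossover/mutation operators are generic choices.
import numpy as np

rng = np.random.default_rng(0)

# Bounds: Li >= 40 nm (the 1420 nm upper bound is an encoding choice: the
# 1500 nm period limit minus two minimum-size segments); SWG indices in [2, 3].
LOW = np.array([40.0] * 6 + [2.0, 2.0])
HIGH = np.array([1420.0] * 6 + [3.0, 3.0])

def feasible(x):
    """Period constraint: each grating cell shorter than 1500 nm."""
    return (x[0] + x[1] + x[2] < 1500.0) and (x[3] + x[4] + x[5] < 1500.0)

def upward_efficiency_2d_fdtd(x):
    # Placeholder for the 2D FDTD evaluation of the upward diffraction
    # efficiency; a smooth toy surrogate is used here only so the sketch runs.
    return float(np.exp(-np.sum(((x - (LOW + HIGH) / 2) / (HIGH - LOW)) ** 2)))

def fitness(x):
    return upward_efficiency_2d_fdtd(x) if feasible(x) else 0.0

def evolve(pop_size=100, generations=50, mutation_sigma=0.05):
    pop = rng.uniform(LOW, HIGH, size=(pop_size, LOW.size))
    for _ in range(generations):
        scores = np.array([fitness(x) for x in pop])
        # binary tournament selection
        idx = rng.integers(0, pop_size, size=(pop_size, 2))
        winners = np.where(scores[idx[:, 0]] >= scores[idx[:, 1]], idx[:, 0], idx[:, 1])
        parents = pop[winners]
        # uniform crossover with a shuffled copy, then Gaussian mutation
        mates = parents[rng.permutation(pop_size)]
        children = np.where(rng.random(pop.shape) < 0.5, parents, mates)
        children += rng.normal(0.0, mutation_sigma * (HIGH - LOW), size=pop.shape)
        pop = np.clip(children, LOW, HIGH)
    return pop[np.argmax([fitness(x) for x in pop])]

best = evolve()
print("best design vector [L1..L6 (nm), n1, n2]:", np.round(best, 2))
```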
In the algorithm, the fitness is represented by the upward diffraction efficiency and the population size is chosen as 100 individuals. For each population, the fitness is evaluated via 2D FDTD simulations, and the optimization is stopped if the change in efficiency is less than 10⁻³ for 10 generations. Each simulation run takes approximately 2 minutes on a workstation with an 8-core 2.2 GHz CPU and 28 GB of RAM. In our case, the optimization converges after 16 generations. Finally, to validate our results, a full 3D FDTD simulation was performed on the final antenna design, where the actual SWG structures were obtained by mapping the equivalent refractive indices to the pitch and duty cycle of the two SWG sections [50,51].

3. Simulation results and discussion

In order to investigate the performance of antennas of different sizes, we consider different numbers of periods between 3 and 20 and optimize the antenna parameters for the highest efficiency and directionality. The grating directionality is calculated as:

(1) $$\Gamma = \frac{T_{up}}{T_{up} + T_{down}}$$

where Tup and Tdown are the powers diffracted upward and downward, respectively. The diffracted power is normalized to the input power, hence Tup is equal to the diffraction efficiency η. Figure 2 shows the diffraction efficiency at λ = 1550 nm for optimized antennas with varying numbers of grating periods. Based on these results, we choose 10 periods as a trade-off between antenna efficiency and directionality (η = 0.92, Γ = 0.94) and compactness (grating length 7.6 μm). It is noted that the grating reflections into the input waveguide remain almost unchanged for the designs with 10 and 20 periods.

Fig. 2. Diffraction efficiency at λ = 1550 nm for optimized gratings with different numbers of periods.

Using 10 periods for the antenna grating, we found the following optimized values for the structural parameters: X = [289 nm, 217 nm, 209 nm, 283 nm, 272 nm, 210 nm, 2.64, 2.39]. As the next step, we performed wavelength scans for this optimized structure. Figure 3(a) shows the diffraction efficiency for a wavelength range from 1.4 μm to 1.7 μm. First, we performed 2D FDTD simulations (Fig. 3, dashed curves). A broad 1-dB bandwidth of 230 nm is achieved near the central wavelength of 1.55 μm. It is also observed that the S, C and L optical communication bands all fit within the 1-dB bandwidth of our antenna. In Fig. 3(b) we show the grating modal reflectivity, i.e., the fraction of back-reflected power that couples into the counter-propagating TE mode of the input waveguide. A reflectivity of less than -13 dB is predicted over the S, C and L optical communication bands.

Fig. 3. 2D and 3D FDTD simulation results for the (a) upward diffraction efficiency and (b) back-reflection, as a function of wavelength.

The SWG sections are periodically structured assuming a full etch with a pitch ΛSWG1 = ΛSWG2 = 400 nm and duty cycles of DCSWG1 = 0.7 for the SWG in the first (apodized) period and DCSWG2 = 0.59 for the periodic section. This yields a minimum feature size of 120 nm, ensuring compatibility with deep-UV lithography [22,23]. The total grating length (x-direction) is 7.6 μm. First, we used 3D FDTD simulations to evaluate the diffraction efficiency as a function of the antenna width for the optimized design with 10 periods, without re-optimizing the structure for each width. As shown in Fig. 4,
in the transverse direction (along the y-axis) an antenna width of 4.5 μm provides a good trade-off between compactness and efficiency penalty.

Fig. 4. Diffraction efficiency at λ = 1550 nm for the optimized antenna with 10 grating periods, as a function of waveguide width.

Rigorous 3D FDTD simulations were also used to validate the spectral behavior of the antenna, and the results are reported in Fig. 3 (solid curves), confirming good agreement with the 2D analysis. The efficiency at λ = 1550 nm obtained from 2D and 3D simulations is 0.92 and 0.89, respectively, while the reflectivity is -23 dB (2D) and -15.5 dB (3D). The reflectivity remains below -10 dB over the S, C and L optical communication bands. The ripples that appear in the diffraction efficiency and reflection spectra are due to the abrupt transition at the end of the grating. An additional grating period acting as an anti-reflective layer could be inserted at the end of the grating to reduce reflection and mitigate the resulting interference pattern. Figure 5 shows the simulated electric field distribution (the real part of Ey). Less than 5% of the power is diffracted downwards, and the residual power in the waveguide at the end of the grating is about 3%.

Fig. 5. 3D FDTD simulation of the electric field distribution of the optimized antenna. The real part of Ey is shown. The silicon substrate is also included in the simulation window.

The far-field radiation pattern of the antenna as a function of θ (polar angle) and ϕ (azimuthal angle) is shown in Fig. 6. The diffraction angle is 23° from the vertical, and the full width at half maximum (FWHM) of the far-field intensity along the polar and azimuthal coordinates is 11° and 52° at 1550 nm, respectively. The radiation angle wavelength shift is 0.12°/nm over the 1.5-1.6 μm wavelength range, with a variation of the diffraction efficiency of about 4% over the same range.

Fig. 6. a) Antenna far-field radiation pattern at 1550 nm wavelength. b) Far-field distribution as a function of θ (for ϕ = 0) and c) along ϕ (for θ = 23°). (The far-field intensity distribution is normalized to the maximum far-field intensity.)

In our study we focus on the fundamental characteristics of the building-block antenna component, such as the directional emission efficiency, broadband operation and small footprint. These antennas can be used in many applications, including broadband fiber-chip couplers and optical phased arrays (OPAs). In particular, for the OPA application, the small footprint of our antenna makes it possible to build a high-density, large-aperture array, which can benefit from the high directionality of the antennas for increased efficiency. It is also important to minimize the separation between the array elements, to maximize the steering range with a high radiation efficiency. In order to evaluate the performance of our antenna in a 2D array, we simulated a 10×10 array of gratings placed on a rectangular grid with a spacing of 8 μm in both the x and y directions. The far field of the array is calculated as the product of the antenna far field and the array factor function. The simulated far fields of the antenna and of the array are shown in Fig. 7(a) and Fig. 7(b), respectively, while the far-field elevation cut (ϕ = 0) is shown in Fig. 7(c), and Fig. 7(d) shows the azimuthal cut (θ = 23°). It is observed that the grating lobes are separated by 11.2°, as expected from the array spacing of 8 μm.
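As a quick numerical check of the quoted lobe spacing, the short sketch below (our illustration, not code from the paper) forms the uniform array factor for one 10-element row of the grid with an 8 μm pitch at 1550 nm and evaluates where its grating lobes fall; as stated above, the full array far field is this factor multiplied by the single-antenna pattern.

```python
# Hedged illustration (not code from the paper): uniform array factor for one
# 10-element row of the simulated 10x10 grid, used to check the ~11.2 degree
# grating-lobe spacing quoted above for an 8 um pitch at 1550 nm.
import numpy as np

wavelength = 1.55e-6          # m
pitch = 8.0e-6                # m, element spacing along x (and y)
n_elements = 10

theta = np.radians(np.linspace(-30.0, 30.0, 20001))    # polar-angle cut at phi = 0
psi = 2.0 * np.pi / wavelength * pitch * np.sin(theta)  # phase step between adjacent elements

# |sin(N*psi/2) / (N*sin(psi/2))|, with the 0/0 limit at the lobe centers set to 1
num = np.sin(n_elements * psi / 2.0)
den = n_elements * np.sin(psi / 2.0)
af = np.abs(np.divide(num, den, out=np.ones_like(psi), where=np.abs(den) > 1e-9))

# Grating lobes occur where psi is a multiple of 2*pi, i.e. sin(theta) = m * lambda / pitch
lobes = np.degrees(np.arcsin(np.array([-1, 0, 1]) * wavelength / pitch))
print("grating lobes (deg):", np.round(lobes, 1))       # approximately [-11.2, 0.0, 11.2]
print("array factor at theta = 0:", round(float(af[theta.size // 2]), 3))  # main-lobe peak, = 1
```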
The calculated lobe-free steering range is θ × ϕ = 11.2° × 90°, which compares favorably with state-of-the-art devices [52]. Furthermore, at the peak radiation angle of the array (θ = 23°), the efficiency is 89%. This is among the highest design efficiencies reported for an antenna array in silicon waveguides.

Fig. 7. Calculated far field of a) the single grating antenna and b) the 10×10 array of gratings with an 8 μm spacing in both the x and y directions; the array far-field c) elevation cut at ϕ = 0° and d) azimuthal cut at θ = 23°.

The design parameters for the antenna certainly differ for different applications. While for fiber-chip couplers the grating antenna should match the size of the fiber aperture for efficient light coupling between the chip and the fiber, in OPA applications the antenna size should be as small as possible to maximize the steering range [4]. Fiber-chip couplers are limited in bandwidth mainly because of the dependence of the diffraction angle on wavelength, which significantly impacts the fiber coupling efficiency, with fiber-chip diffraction angles typically in the range from 0° to 25° [1,12,13]. On the other hand, in OPA applications, the angular width of the diffracted field of a single antenna limits the maximum scanning range of the array.

4. Fabrication tolerance analysis

An important aspect of our antenna design is the evaluation of its tolerance to fabrication errors. To investigate fabrication tolerances, we considered variations of the structural parameters for three possible fabrication errors, accounting for structural size variations in both the propagation and transverse directions. The first source of uncertainty is a possible mask misalignment between the shallow and deep etch patterns. In this case, the misalignment produces errors only in the direction of propagation. Figure 8(a) shows the schematic view of a mask misalignment δ < 0, resulting in an extra un-etched section of length |δ| between the shallow-etch part of the L-shape and the SWG segments, as well as dimensional changes L′3,6 = L3,6 + δ. A mask misalignment δ > 0 (Fig. 8(b)) results in the SWG segments being partially etched. To simulate this case, an additional shallow-etched section of length δ, with the same SWG duty cycle and pitch as sections L1 and L4, is placed between the shallow-etch part of the L-shape and the SWG segments, yielding L′3,6 = L3,6 + δ and L′1,2,4,5 = L1,2,4,5 − δ.

Fig. 8. Schematic top view of structural size variations for the first cell of the structure; a) and b) mask misalignment; c) and d) under/over full etch; e) and f) under/over shallow etch.

Another important fabrication error can arise from dimensional errors in the fully etched features of the antenna, i.e., the holes in the SWG regions (Fig. 8(c)). To investigate the influence of this error, we first assume under-etching (δ < 0), that is, SWG holes smaller than the nominal value. In this case, L′1,4 = L1,4 + δ and two un-etched sections of identical length |δ|/2 are added on each side of the SWG section, reducing the hole size along the x-direction. In the y-direction, the etched and un-etched parts of the SWG sections are modified as w′1,3 = w1,3 + δ and w′2,4 = w2,4 + δ. To simulate the effect of over-etching (SWG holes larger than the nominal value), two sections of length δ/2 are added on each side of the SWG region, while the SWG region adjacent to the partially etched lengths of the L-shape (L2, L5) becomes partially etched (Fig. 8(d)).
In this case (δ > 0), the structural parameters become:

(2) $$L'_{2,3,5,6} = L_{2,3,5,6} - \delta/2, \qquad w'_{1,3} = w_{1,3} + \delta, \qquad w'_{2,4} = w_{2,4} - \delta.$$

Finally, we study the effect of dimensional deviations of the shallow-etched regions. For over-etching (δ > 0), L′2,5 = L2,5 + δ/2 and L′1,3,4,6 = L1,3,4,6 − δ/2, i.e., a shallow-etched section of length δ/2 is added to represent the over-etched part of the shallow-etch regions (Fig. 8(e)). Similarly, in Fig. 8(f), under-etching (δ < 0) is modeled by adding a section of length |δ|/2 between the L-shape etch and the SWG segments, yielding:

(3) $$L'_{2} = L_{2} + \delta, \qquad L'_{3,6} = L_{3,6} - \delta/2, \qquad L'_{5} = L_{5} + \delta/2.$$

Figure 9 shows the upward diffraction efficiency and back-reflections computed with 3D FDTD for the different types of fabrication errors, considering dimensional variations (δ) between ±20 nm. As can be seen, the changes in diffraction efficiency are most pronounced over the ±20 nm range for the mask misalignment error, which is expected since this misalignment affects the dimensional aspect ratio between the L-shape and SWG sections. The back-reflection variations remain comparatively small for these fabrication errors (Fig. 9(b)).

Fig. 9. a) Diffraction efficiency and b) back-reflections for three different types of fabrication errors.

Finally, we also considered the impact that a variation of the core thickness could have on the grating performance. This variability is expected to be much smaller than the lithography-related fabrication variations considered above, with a standard deviation on the order of 1 nm [53]. Assuming a silicon thickness of 300 ± 5 nm, while keeping the other grating dimensions unchanged, we obtained a negligible variation of <1% for both diffraction efficiency and reflection compared to the nominal case.

5. Conclusions

In this paper, we proposed a new type of silicon grating antenna based on SWG metamaterials on a 300-nm SOI platform, simultaneously achieving high diffraction efficiency and directionality and broadband operation, with a compact device footprint. Our 3D FDTD simulations predict a peak diffraction efficiency of 0.89 (-0.5 dB) with a 1-dB bandwidth of 230 nm. Back-reflections as low as -15 dB are achieved. The compact antenna (7.6 μm × 4.5 μm) has a minimum feature size of 120 nm, which is compatible with deep-UV lithography, and a robust tolerance to common nanofabrication errors. By leveraging subwavelength metamaterial engineering, this compact and highly efficient nanoantenna design opens a promising path towards the development of a new library of integrated photonic components for applications in optical phased arrays and off-chip interconnects.

Funding. National Research Council Canada (HTSN 604, CSTIP, HTSN 209, New Ideation Program, Technology and Innovation Program); Canada Research Chairs; Natural Sciences and Engineering Research Council of Canada.

Acknowledgments. The authors acknowledge Lei Yuan for assisting with the optical phased array simulations.

References

1. D. Melati, M. Kamandar Dezfouli, Y. Grinberg, J. H. Schmid, R. Cheriton, S. Janz, P. Cheben, and D. X. Xu, "Design of compact and efficient silicon photonic micro antennas with perfectly vertical emission," IEEE J. Sel. Top. Quantum Electron. 27(1), 1–10 (2021). [CrossRef] 2. T. Kim, P. Bhargava, C. V. Poulton, J. Notaros, A. Yaacobi, E. Timurdogan, C. Baiocco, N. Fahrenkopf, S. Kruger, T. Ngai, Y.
Timalsina, M. R. Watts, and V. Stojanovic, "A single-chip optical phased array in a wafer-scale silicon photonics/CMOS 3D-integration platform," IEEE J. Solid-State Circuits 54(11), 3061–3074 (2019). [CrossRef] 3. J. L. Pita, I. Aldaya, P. Dainese, H. E. Hernandez-Figueroa, and L. H. Gabrielli, "Design of a compact CMOS-compatible photonic antenna by topological optimization," Opt. Express 26(3), 2435–2442 (2018). [CrossRef] 4. J. Sun, E. Timurdogan, A. Yaacobi, E. S. Hosseini, and M. R. Watts, "Large-scale nanophotonic phased array," Nature 493(7431), 195–199 (2013). [CrossRef] 5. R. Fatemi, A. Khachaturian, and A. Hajimiri, "A nonuniform sparse 2-D large-FOV optical phased array with a low-power PWM drive," IEEE J. Solid-State Circuits 54(5), 1200–1215 (2019). [CrossRef] 6. H. Abediasl and H. Hashemi, "Monolithic optical phased-array transceiver in a standard SOI CMOS process," Opt. Express 23(5), 6509–6519 (2015). [CrossRef] 7. L. Novotny, Optical Antennas: A New Technology That Can Enhance Light-Matter Interactions (Frontiers of Engineering, 2012). 8. D. Vermeulen, S. Selvaraja, P. Verheyen, G. Lepage, W. Bogaerts, P. Absil, D. Van Thourhout, and G. Roelkens, "High-efficiency fiber-to-chip grating couplers realized using an advanced CMOS-compatible Silicon-On-Insulator platform," Opt. Express 18(17), 18278–18283 (2010). [CrossRef] 9. D. Melati, Y. Grinberg, M. Kamandar Dezfouli, S. Janz, P. Cheben, J. H. Schmid, A. Sánchez-Postigo, and D. X. Xu, "Mapping the global design space of nanophotonic components using machine learning pattern recognition," Nat. Commun. 10(1), 1–9 (2019). [CrossRef] 10. T. Watanabe, M. Ayata, U. Koch, Y. Fedoryshyn, and J. Leuthold, "Perpendicular grating coupler based on a blazed antiback-reflection structure," J. Lightwave Technol. 35(21), 4663–4669 (2017). [CrossRef] 11. D. Benedikovic, C. Alonso-Ramos, P. Cheben, J. H. Schmid, S. Wang, D.-X. Xu, J. Lapointe, S. Janz, R. Halir, A. Ortega-Moñux, J. G. Wangüemert-Pérez, I. Molina-Fernández, J.-M. Fédéli, L. Vivien, and M. Dado, "High-directionality fiber-chip grating coupler with interleaved trenches and subwavelength index-matching structure," Opt. Lett. 40(18), 4190–4193 (2015). [CrossRef] 12. C. Alonso-Ramos, P. Cheben, A. Ortega-Moñux, J. H. Schmid, D.-X. Xu, and I. Molina-Fernández, "Fiber-chip grating coupler based on interleaved trenches with directionality exceeding 95%," Opt. Lett. 39(18), 5351–5354 (2014). [CrossRef] 13. D. Benedikovic, C. Alonso-Ramos, S. Guerber, X. Le Roux, P. Cheben, C. Dupré, B. Szelag, D. Fowler, É. Cassan, D. Marris-Morini, C. Baudot, F. Boeuf, and L. Vivien, "Sub-decibel silicon grating couplers based on L-shaped waveguides and engineered subwavelength metamaterials," Opt. Express 27(18), 26239–26250 (2019). [CrossRef] 14. M. Passoni, D. Gerace, L. Carroll, and L. C. Andreani, "Grating couplers in silicon-on-insulator: The role of photonic guided resonances on lineshape and bandwidth," Appl. Phys. Lett. 110(4), 41107–41111 (2017). [CrossRef] 15. D. Benedikovic, P. Cheben, J. H. Schmid, D. Xu, J. Lapointe, S. Wang, R. Halir, A. Ortega-Moñux, S. Janz, and M. Dado, "High-efficiency single etch step apodized surface grating coupler using subwavelength structure," Laser Photonics Rev. 8(6), L93–L97 (2014). [CrossRef] 16. D. Taillaert, P. Bienstman, and R. Baets, "Compact efficient broadband grating coupler for silicon-on-insulator waveguides," Opt. Lett. 29(23), 2749–2751 (2004). [CrossRef] 17. W. Zhou, Z. Cheng, X. Chen, K. Xu, X. Sun, and H. 
Tsang, "Subwavelength engineering in silicon photonic devices," IEEE J. Sel. Top. Quantum Electron. 25(3), 1–13 (2019). [CrossRef] 18. D. Benedikovic, C. Alonso-Ramos, D. Pérez-Galacho, S. Guerber, V. Vakarin, G. Marcaud, X. Le Roux, E. Cassan, D. Marris-Morini, P. Cheben, F. Boeuf, C. Baudot, and L. Vivien, "L-shaped fiber-chip grating couplers with high directionality and low reflectivity fabricated with deep-UV lithography," Opt. Lett. 42(17), 3439–3442 (2017). [CrossRef] 19. M. Kamandar Dezfouli, Y. Grinberg, D. Melati, P. Cheben, J. Schmid, A. Sánchez-Postigo, A. Ortega-Moñux, J. G. Wangüemert-Pérez, R. Cheriton, S. Janz, and D.-X. Xu, "Perfectly vertical surface grating couplers using subwavelength engineering for increased feature sizes," Opt. Lett. 45(13), 3701–3704 (2020). [CrossRef] 20. R. Halir, A. Ortega-Moñux, D. Benedikovic, G. Z. Mashanovich, J. G. Wangüemert-Pérez, J. H. Schmid, Í. Molina-Fernández, and P. Cheben, "Subwavelength-grating metamaterial structures for silicon photonic devices," Proc. IEEE 106(12), 2144–2157 (2018). [CrossRef] 21. R. Halir, P. Cheben, J. H. Schmid, R. Ma, D. Bedard, S. Janz, D.-X. Xu, A. Densmore, J. Lapointe, and Í. Molina-Fernández, "Continuously apodized fiber-to-chip surface grating coupler with refractive index engineered subwavelength structure," Opt. Lett. 35(19), 3243–3245 (2010). [CrossRef] 22. J. H. Schmid, P. Cheben, P. J. Bock, R. Halir, J. Lapointe, S. Janz, A. Delage, A. Densmore, J.-M. Fedeli, T. J. Hall, B. Lamontagne, R. Ma, I. Molina-Fernandez, and D.-X. Xu, "Refractive Index Engineering With Subwavelength Gratings in Silicon Microphotonic Waveguides," IEEE Photonics J. 3(3), 597–607 (2011). [CrossRef] 23. J. D. Sarmiento-Merenguel, A. Ortega-Moñux, J.-M. Fédéli, J. G. Wangüemert-Pérez, C. Alonso-Ramos, E. Durán-Valdeiglesias, P. Cheben, Í. Molina-Fernández, and R. Halir, "Controlling leakage losses in subwavelength grating silicon metamaterial waveguides," Opt. Lett. 41(15), 3443–3446 (2016). [CrossRef] 24. D. X. Xu, J. H. Schmid, G. T. Reed, G. Z. Mashanovich, D. J. Thomson, M. Nedeljkovic, X. Chen, D. Van Thourhout, S. Keyvaninia, and S. K. Selvaraja, "Silicon photonic integration platform-Have we found the sweet spot?" IEEE J. Sel. Top. Quantum Electron. 20(4), 189–205 (2014). [CrossRef] 25. A. Bozzola, L. Carroll, D. Gerace, I. Cristiani, and L. C. Andreani, "Optimising apodized grating couplers in a pure SOI platform to −0.5 dB coupling efficiency," Opt. Express 23(12), 16289–16304 (2015). [CrossRef] 26. S. Khajavi, D. Melati, P. Cheben, J. H. Schmid, D.-X. Xu, S. Janz, and W. N Ye, "Design of compact silicon antennas based on high directionality gratings," in IEEE Photonics Conference (IPC, 2020), pp. 1–2. 27. H. Y. Chen and K. C. Yang, "Design of a high-efficiency grating coupler based on a silicon nitride overlay for silicon-on-insulator waveguides," Appl. Opt. 49(33), 6455–6462 (2010). [CrossRef] 28. S. Yang, Y. Zhang, T. Baehr-Jones, and M. Hochberg, "High efficiency germanium-assisted grating coupler," Opt. Express 22(25), 30607–30612 (2014). [CrossRef] 29. J. H. Schmid, P. Cheben, S. Janz, J. Lapointe, E. Post, and D.-X. Xu, "Gradient-index antireflective subwavelength structures for planar waveguide facets," Opt. Lett. 32(13), 1794–1796 (2007). [CrossRef] 30. P. Cheben, D.-X. Xu, S. Janz, and A. Densmore, "Subwavelength waveguide grating for mode conversion and light coupling in integrated optics," Opt. Express 14(11), 4695–4702 (2006). [CrossRef] 31. J. H. Schmid, P. Cheben, S. Janz, J. Lapointe, E. Post, A. 
Delâge, A. Densmore, B. Lamontagne, P. Waldron, and D.-X. Xu, "Subwavelength grating structures in planar waveguide facets for modified reflectivity," Proc. SPIE 6796, 67963E1–67963E10 (2007). [CrossRef] 32. P. J. Bock, P. Cheben, J. H. Schmid, A. Delâge, D.-X. Xu, S. Janz, and T. J. Hall, "Sub-wavelength grating mode transformers in silicon slab waveguides," Opt. Express 17(21), 19120–19133 (2009). [CrossRef] 33. P. Cheben, J. H. Schmid, D.-X. Xu, A. Densmore, and S. Janz, "Composite subwavelength-structured waveguide in optical systems," U.S. patent 8,503,839B2 (August 6, 2013). 34. P. Cheben, R. Halir, J. H. Schmid, H. A. Atwater, and D. R. Smith, "Subwavelength integrated photonics," Nature 560(7720), 565–572 (2018). [CrossRef] 35. P. Cheben, S. Janz, D. X. Xu, B. Lamontagne, A. Delâge, and S. Tanev, "A broad-band waveguide grating coupler with a subwavelength grating mirror," IEEE Photonics Technol. Lett. 18(1), 13–15 (2006). [CrossRef] 36. Q. Zhong, V. Veerasubramanian, Y. Wang, W. Shi, D. Patel, S. Ghosh, A. Samani, L. Chrostowski, R. Bojko, and D. V. Plant, "Focusing-curved subwavelength grating couplers for ultra-broadband silicon photonics optical interfaces," Opt. Express 22(15), 18224–18231 (2014). [CrossRef] 37. A. Sánchez-Postigo, J. Gonzalo Wangüemert-Pérez, J. M. Luque-González, Í. Molina-Fernández, P. Cheben, C. A. Alonso-Ramos, R. Halir, J. H. Schmid, and A. Ortega-Moñux, "Broadband fiber-chip zero-order surface grating coupler with 0.4 dB efficiency," Opt. Lett. 41(13), 3013–3016 (2016). [CrossRef] 38. Y. Wang, W. Shi, X. Wang, Z. Lu, M. Caverley, R. Bojko, L. Chrostowski, and N. A. F. Jaeger, "Design of broadband subwavelength grating couplers with low back reflection," Opt. Lett. 40(20), 4647–4650 (2015). [CrossRef] 39. J. Zou, Y. Yu, and X. Zhang, "Single step etched two dimensional grating coupler based on the SOI platform," Opt. Express 23(25), 32490–32495 (2015). [CrossRef] 40. J. Zou, Y. Yu, and X. Zhang, "Two-dimensional grating coupler with a low polarization dependent loss of 0.25 dB covering the C-band," Opt. Lett. 41(18), 4206–4209 (2016). [CrossRef] 41. J. Kang, Z. Cheng, W. Zhou, T.-H. Xiao, K.-L. Gopalakrisna, M. Takenaka, H. K. Tsang, and K. Goda, "Focusing subwavelength grating coupler for mid-infrared suspended membrane germanium waveguides," Opt. Lett. 42(11), 2094–2097 (2017). [CrossRef] 42. Y. Wang, X. Wang, J. Flueckiger, H. Yun, W. Shi, R. Bojko, N. A. F. Jaeger, and L. Chrostowski, "Focusing sub-wavelength grating couplers with low back reflections for rapid prototyping of silicon photonic circuits," Opt. Express 22(17), 20652–20662 (2014). [CrossRef] 43. X. Chen and H. K. Tsang, "Nanoholes grating couplers for coupling between silicon-on-insulator waveguides and optical fibers," IEEE Photonics J. 1(3), 184–190 (2009). [CrossRef] 44. X. Xu, H. Subbaraman, D. Kwong, J. Covey, A. Hosseini, and R. T. Chen, "Colorless grating couplers realized by interleaving dispersion engineered subwavelength structures," in Conference on Lasers and Electro-Optics (CLEO, 2013), pp. 3588–3591. 45. L. Carroll, D. Gerace, I. Cristiani, and L. C. Andreani, "Optimizing polarization-diversity couplers for Si-photonics: reaching the −1 dB coupling efficiency threshold," Opt. Express 22(12), 14769–14781 (2014). [CrossRef] 46. D. Benedikovic, C. Alonso-Ramos, P. Cheben, J. H. Schmid, S. Wang, R. Halir, A. Ortega-Moñux, D.-X. Xu, L. Vivien, J. Lapointe, S. Janz, and M. 
Dado, "Single-etch subwavelength engineered fiber-chip grating couplers for 13 µm datacom wavelength band," Opt. Express 24(12), 12893–12904 (2016). [CrossRef] 47. X. Chen, D. J. Thomson, L. Crudginton, A. Z. Khokhar, and G. T. Reed, "Dual-etch apodised grating couplers for efficient fibre-chip coupling near 1310 nm wavelength," Opt. Express 25(15), 17864–17871 (2017). [CrossRef] 48. Y. Tong, W. Zhou, and H. Ki Tsang, "Efficient perfectly vertical grating coupler for multi-core fibers fabricated with 193 nm DUV lithography," Opt. Lett. 43(23), 5709–5712 (2018). [CrossRef] 49. C. R. Houck, J. Joines, and M. G. Kay, "A genetic algorithm for function optimization: a Matlab implementation," Ncsu-ie tr95(09), 1–10 (1995). 50. P. J. Bock, P. Cheben, J. H. Schmid, J. Lapointe, A. Delâge, S. Janz, G. C. Aers, D.-X. Xu, A. Densmore, and T. J. Hall, "Subwavelength grating periodic structures in silicon-on-insulator: a new type of microphotonic waveguide," Opt. Express 18(19), 20251–20262 (2010). [CrossRef] 51. P. Cheben, J. H. Schmid, R. Halir, A. Sánchez-Postigo, D. X. Xu, S. Janz, J. Lapointe, S. Wang, M. Vachon, A. Ortega-Moñux, G. Wangüemert-Pérez, I. Molina-Fernández, J. M. Luque-Gonzalez, J. D. Sarmiento-Merenguel, J. Pond, D. Benedikovic, C. Alonso-Ramos, M. Dado, J. Müllerová, M. Pánes, and V. Vasinek, "Subwavelength index engineered waveguides and devices," in Optical Fiber Communication Conference (OFC, 2017), paper Tu3K-2. 52. M. J. R. Heck, "Highly integrated optical phased arrays: Photonic integrated circuits for optical beam shaping and beam steering," Nanophotonics 6(1), 93–107 (2017). [CrossRef] 53. Y. Xing, M. Wang, A. Ruocco, J. Geessels, U. Khan, and W. Bogaerts, "Compact silicon photonics circuit to extract multiple parameters for process control monitoring," OSA Continuum 3(2), 379–390 (2020). [CrossRef] D. Melati, M. Kamandar Dezfouli, Y. Grinberg, J. H. Schmid, R. Cheriton, S. Janz, P. Cheben, and D. X. Xu, "Design of compact and efficient silicon photonic micro antennas with perfectly vertical emission," IEEE J. Sel. Top. Quantum Electron. 27(1), 1–10 (2021). T. Kim, P. Bhargava, C. V. Poulton, J. Notaros, A. Yaacobi, E. Timurdogan, C. Baiocco, N. Fahrenkopf, S. Kruger, T. Ngai, Y. Timalsina, M. R. Watts, and V. Stojanovic, "A single-chip optical phased array in a wafer-scale silicon photonics/CMOS 3D-integration platform," IEEE J. Solid-State Circuits 54(11), 3061–3074 (2019). J. L. Pita, I. Aldaya, P. Dainese, H. E. Hernandez-Figueroa, and L. H. Gabrielli, "Design of a compact CMOS-compatible photonic antenna by topological optimization," Opt. Express 26(3), 2435–2442 (2018). J. Sun, E. Timurdogan, A. Yaacobi, E. S. Hosseini, and M. R. Watts, "Large-scale nanophotonic phased array," Nature 493(7431), 195–199 (2013). R. Fatemi, A. Khachaturian, and A. Hajimiri, "A nonuniform sparse 2-D large-FOV optical phased array with a low-power PWM drive," IEEE J. Solid-State Circuits 54(5), 1200–1215 (2019). H. Abediasl and H. Hashemi, "Monolithic optical phased-array transceiver in a standard SOI CMOS process," Opt. Express 23(5), 6509–6519 (2015). L. Novotny, Optical Antennas: A New Technology That Can Enhance Light-Matter Interactions (Frontiers of Engineering, 2012). D. Vermeulen, S. Selvaraja, P. Verheyen, G. Lepage, W. Bogaerts, P. Absil, D. Van Thourhout, and G. Roelkens, "High-efficiency fiber-to-chip grating couplers realized using an advanced CMOS-compatible Silicon-On-Insulator platform," Opt. Express 18(17), 18278–18283 (2010). D. Melati, Y. Grinberg, M. 
Boeuf, F. Bogaerts, W. Bojko, R. Bozzola, A. Carroll, L. Cassan, E. Cassan, É. Caverley, M. Cheben, P. Chen, H. Y. Chen, R. T. Chen, X. Cheng, Z. Cheriton, R. Chrostowski, L. Covey, J. Cristiani, I. Crudginton, L. Dado, M. Dainese, P. Delage, A. Delâge, A. Densmore, A. Dupré, C. Durán-Valdeiglesias, E. Fahrenkopf, N. Fatemi, R. Fedeli, J.-M. Fédéli, J.-M. Fedoryshyn, Y. Flueckiger, J. Fowler, D. Gabrielli, L. H. Geessels, J. Gerace, D. Ghosh, S. Goda, K. Gonzalo Wangüemert-Pérez, J. Gopalakrisna, K.-L. Grinberg, Y. Guerber, S. Hajimiri, A. Halir, R. Hall, T. J. Hashemi, H. Heck, M. J. R. Hernandez-Figueroa, H. E. Hochberg, M. Hosseini, A. Hosseini, E. S. Houck, C. R. Jaeger, N. A. F. Janz, S. Joines, J. Kamandar Dezfouli, M. Kang, J. Kay, M. G. Keyvaninia, S. Khachaturian, A. Khajavi, S. Khan, U. Khokhar, A. Z. Ki Tsang, H. Kim, T. Koch, U. Kruger, S. Kwong, D. Lamontagne, B. Lapointe, J. Le Roux, X. Lepage, G. Leuthold, J. Lu, Z. Luque-Gonzalez, J. M. Luque-González, J. M. Ma, R. Marcaud, G. Marris-Morini, D. Mashanovich, G. Z. Melati, D. Molina-Fernandez, I. Molina-Fernández, I. Molina-Fernández, Í. Müllerová, J. Nedeljkovic, M. Ngai, T. Notaros, J. Novotny, L. Ortega-Moñux, A. Pánes, M. Passoni, M. Patel, D. Pérez-Galacho, D. Pita, J. L. Plant, D. V. Pond, J. Post, E. Poulton, C. V. Reed, G. T. Roelkens, G. Ruocco, A. Samani, A. Sánchez-Postigo, A. Sarmiento-Merenguel, J. D. Schmid, J. Schmid, J. H. Selvaraja, S. Selvaraja, S. K. Shi, W. Stojanovic, V. Subbaraman, H. Sun, X. Szelag, B. Taillaert, D. Takenaka, M. Tanev, S. Thomson, D. J. Timalsina, Y. Timurdogan, E. Tong, Y. Tsang, H. Tsang, H. K. Vachon, M. Vakarin, V. Van Thourhout, D. Vasinek, V. Veerasubramanian, V. Verheyen, P. Vermeulen, D. Vivien, L. Waldron, P. Wang, M. Wang, S. Wang, Y. Wangüemert-Pérez, G. Wangüemert-Pérez, J. G. Watanabe, T. Xiao, T.-H. Xing, Y. Xu, D. Xu, D. X. Xu, D.-X. Xu, K. Xu, X. Yaacobi, A. Yang, K. C. Yang, S. Ye, W. N Yu, Y. Yun, H. Zhong, Q. Zhou, W. Zou, J. IEEE J. Sel. Top. Quantum Electron. (3) IEEE J. Solid-State Circuits (2) IEEE Photonics J. (2) IEEE Photonics Technol. Lett. (1) J. Lightwave Technol. (1) Laser Photonics Rev. (1) Nanophotonics (1) Opt. Express (15) Opt. Lett. (13) OSA Continuum (1) Proc. IEEE (1) (1) Γ = T u p T u p + T d o w n (2) L 2 , 3 , 5 , 6 ′ = L 2 , 3 , 5 , 6 − δ / 2 w 1 , 3 ′ = w 1 , 3 + δ w 2 , 4 ′ = w 2 , 4 − δ . (3) L 2 ′ = L 2 + δ L 3 , 6 ′ = L 3 , 6 − δ / 2 L 5 ′ = L 5 + δ / 2.
CommonCrawl
BrCN hybrid orbitals

Hybrid orbitals arise in covalent bonding, where electrons are shared between atoms. During bond formation, two or more atomic orbitals combine to make hybrid orbitals, and the number of atomic orbitals combined always equals the number of hybrid orbitals formed. In the overlap region between two orbitals, electrons with opposing spins produce a high electron charge density; in general, the more extensive the overlap, the stronger the bond between the two atoms. Overlap of two hydrogen 1s orbitals in this way forms the familiar H2 molecule, with no electrons forced into a higher orbital and hence no occupied antibonding orbitals. Hybrid orbitals can overlap with each other to form sigma bonds, and they may also be occupied by lone electron pairs.

Common hybridization patterns:
- sp: one s and one p orbital give two equivalent hybrid orbitals pointing 180° apart. In acetylene, each carbon has two sp hybrid orbitals, each holding an unpaired valence electron; end-to-end overlap of one sp orbital from each carbon forms the C–C sigma bond, and the remaining unhybridized p orbitals form the pi bonds.
- sp2 (trigonal planar): one s and two p orbitals give three hybrid orbitals oriented 120° apart in a plane. Each carbon in an aromatic ring uses three sp2 hybrid orbitals to form sigma bonds to the two adjacent carbons and to a hydrogen or another substituent (such as a nitrile group).
- sp3 (tetrahedral): one s and three p orbitals give four hybrid orbitals. In methane, the carbon 2s and three 2p orbitals hybridize into four equivalent sp3 orbitals that form the four C–H bonds. In ammonia, four sp3 hybrid orbitals form, one of which holds a lone pair; the H–N–H angles are 107°, slightly less than the tetrahedral 109.5°, because a bonding pair occupies less space than the lone pair. Diamond is the classic crystal built only from covalent bonds: every carbon is bonded through tetrahedrally directed sp3 hybrid orbitals, and the localized electrons require a large energy to be lifted from the filled valence band into the empty conduction band.
- sp3d and sp3d2 (expanded octets): one s, three p, and one d orbital give five sp3d hybrid orbitals; adding a second d orbital gives six sp3d2 hybrid orbitals. To form five bonds, the five sp3d orbitals each share an electron pair with a halogen atom, for a total of 10 shared electrons, two more than the octet rule predicts; six sp3d2 orbitals similarly accommodate 12 shared electrons. Sulfur in SF4 (four bonds plus one lone pair) therefore uses sp3d hybrid orbitals. XeF4 uses six sp3d2 hybrid orbitals spaced 90° apart, giving a square planar geometry; the coordination number is 4, but the shape differs from tetrahedral (PtCl42− is another square planar example). In XeF2, xenon's ground-state outer shell of eight electrons (s2p6) is promoted to an excited state, and two of those electrons participate in bond formation.

A quick way to determine the hybridization of an atom is to count the sigma bonds and lone pairs around it and assign one hybrid orbital to each (see the sketch below).

Worked problem (Tro, Chemistry: A Molecular Approach, Chapter 10, Problem 97). How many hybrid orbitals do we use to describe each molecule?
a. N2O5
b. C2H5NO (four C–H bonds and one O–H bond)
c. BrCN (no formal charges)

Solution. Draw the Lewis structure, following the octet rule and remembering the valence electrons: C (Group 4A) has 4, N (Group 5A) has 5, O (Group 6A) has 6, and H (Group 1A) has 1. Hybrid orbitals form only sigma bonds, so count one hybrid orbital for every sigma bond or lone pair on each central atom.
a. N2O5 contains two nitrogen atoms and five oxygen atoms. Each nitrogen has two single bonds and one double bond, so it is sp2 and contributes 3 hybrid orbitals; the bridging oxygen has two single bonds and two lone pairs, so it is sp3 and contributes 4. Total: 3 + 3 + 4 = 10 hybrid orbitals.
b. C2H5NO (with four C–H bonds and one O–H bond) contains two carbons, five hydrogens, one nitrogen, and one oxygen. One carbon has four single bonds (sp3, 4 hybrid orbitals); the other carbon has two single bonds and one double bond (sp2, 3); the nitrogen has one double bond, one single bond, and one lone pair (sp2, 3); the oxygen has two single bonds and two lone pairs (sp3, 4). Total: 4 + 3 + 3 + 4 = 14 hybrid orbitals.
c. In cyanogen bromide (BrCN), the central carbon has one single bond to bromine and one triple bond to nitrogen, so it is sp hybridized. The total number of hybrid orbitals used to describe the molecule is 2.
Rethinking methane from animal agriculture Shule Liu1, Joe Proudman1 & Frank M. Mitloehner ORCID: orcid.org/0000-0002-9267-11801 As the global community actively works to keep temperatures from rising beyond 1.5 °C, predicting greenhouse gases (GHGs) by how they warm the planet—and not their carbon dioxide (CO2) equivalence—provides information critical to developing short- and long-term climate solutions. Livestock, and in particular cattle, have been broadly branded as major emitters of methane (CH4) and significant drivers of climate change. Livestock production has been growing to meet the global food demand, however, increasing demand for production does not necessarily result in the proportional increase of CH4 production. The present paper intends to evaluate the actual effects of the CH4 emission from U.S. dairy and beef production on temperature and initiate a rethinking of CH4 associated with animal agriculture to clarify long-standing misunderstandings and uncover the potential role of animal agriculture in fighting climate change. Two climate metrics, the standard 100-year Global Warming Potential (GWP100) and the recently proposed Global Warming Potential Star (GWP*), were applied to the CH4 emission from the U.S. cattle industry to assess and compare its climate contribution. Using GWP*, the projected climate impacts show that CH4 emissions from the U.S. cattle industry have not contributed additional warming since 1986. Calculations show that the California dairy industry will approach climate neutrality in the next ten years if CH4 emissions can be reduced by 1% per year, with the possibility to induce cooling if there are further reductions of emissions. GWP* should be used in combination with GWP to provide feasible strategies on fighting climate change induced by short-lived climate pollutants (SLCPs). By continuously improving production efficiency and management practices, animal agriculture can be a short-term solution to fight climate warming that the global community can leverage while developing long-term solutions for fossil fuel carbon emissions. The irreversible impacts of climate change have threatened the sustainability of the earth's eco-system (O'Gorman 2015; Sahade et al. 2015; Demertzis and Iliadis 2018). The decadal mean temperature has been increasing steadily, resulting in the past decade being the warmest on record (NASA 2020). According to the World Meteorological Organization (WMO), global temperatures during 2015–2019 were on average, 1.1 ± 0.1 °C higher than the pre-industrial level (WMO 2020). The vital solution to stopping the warming trend is achieving net "zero-emission" of long-lived climate pollutants (LLCPs), primarily carbon dioxide (CO2) and to a lesser degree nitrous oxide (N2O). However, there is growing recognition that minimizing the emissions of SLCPs will quickly, though temporarily, slow the warming of the atmosphere and buy time for the global community to develop solutions to keep temperatures from surpassing the 1.5 °C temperature goal set in the Paris Climate Accord (UNFCCC 2016). Primary SLCPs include methane (CH4), black carbon, tropospheric ozone, and hydrofluorocarbons (Pierrehumbert 2014; Haines et al. 2017). These pollutants have a relatively shorter existence in the atmosphere, but have high warming potential (Table 1), contributing one-third of the current radiative forcing (RF) from GHGs (Ramanathan and Xu 2010; Shoemaker et al. 2013). 
Table 1 Comparison of major SLCPs and CO2
Methane is the second-most abundant GHG and an important contributor to climate warming. Globally, the annual emission of anthropogenic CH4 was 572 (538–593) million metric tons (MMT) per year during 2008–2017, an increase of 3.6% over 2000–2010 levels (Saunois et al. 2016; 2019). With an RF of 0.61 W m⁻² (Etminan et al. 2016), CH4 heats the atmosphere 86 and 28 times more efficiently than CO2 over a 20- and 100-year time horizon, respectively.
Methane's short atmospheric existence
Methane has a short atmospheric lifetime of 12.4 years (Myhre et al. 2013a). About 80–89% of the total atmospheric CH4 is removed by oxidation with tropospheric hydroxyl radicals (OH), a process referred to as hydroxyl oxidation (Levy 1971; Badr et al. 1992; Kirschke et al. 2013; He et al. 2019). Other sinks include reactions with stratospheric chlorine and oxygen atoms, uptake by soil, and reactions with chlorine atoms in the marine boundary layer (Saunois et al. 2019; Kirschke et al. 2013). Unlike CO2, which persists and accumulates in the climate system, CH4 is constantly being removed from the atmosphere. This means that neutral warming will be achieved if the emissions equal the amount being oxidized and destroyed in the air. If emissions exceed the amount being removed, there will be warming. If emissions are less than the amount being removed, there will be temporary cooling of the atmosphere. According to Cain et al. (2019), a CH4 emission source declining at an annual rate of 0.3% will not lead to additional warming over 20 years. For example, a herd of 100 head of cattle will contribute new CH4 to the atmosphere. But if the herd remains constant and reduces its emissions by 0.3% every year over the next 20 years, such as through improved genetics, its CH4 emissions will approximate what is being removed from the atmosphere. As a result, the herd's warming from CH4 will be neutral. Reductions beyond that mean that less CH4 is being emitted than is removed from the atmosphere, and will induce cooling. Allen et al. (2017) illustrated the differences between the climate impacts of CH4 versus CO2 in Fig. 1. Entering the atmosphere with steadily rising emissions, both CH4 and CO2 warm the climate (Fig. 1, left). While CH4 warms the climate linearly with its emissions, CO2 warms it at an accelerating rate.
Fig. 1 Corresponding climate impacts of (a) increasing, (b) constant, and (c) decreasing carbon dioxide and methane emissions (adapted from Allen et al. (2017))
When emissions are constant, atmospheric CH4 is in a dynamic equilibrium in which sources and sinks approximately balance each other. It therefore holds the temperature at its elevated level but adds little additional warming. In contrast, the warming caused by constant CO2 emissions keeps increasing as the gas accumulates in the atmosphere (Fig. 1, middle). When emissions fall, the current temperature decreases as the sinks outweigh the emission sources (Fig. 1, right). The warming caused by atmospheric CH4 will drop to near zero within about a decade of its emissions reaching zero. In contrast, the temperature continues to rise while CO2 emissions decline and holds at an elevated level once the emissions reach zero. Because of its comparatively short atmospheric lifetime, reducing CH4 emissions will not contribute to lowering the long-term peak temperature, which is still determined by the stock of atmospheric CO2.
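The contrast shown in the middle panel of Fig. 1 can be reproduced with a toy one-box model. The Python sketch below is an added illustration and is not part of the original analysis: it assumes constant emissions of one arbitrary unit per year for each gas, removes CH4 with a first-order 12.4-year lifetime, and, for simplicity, treats CO2 as if it were not removed at all.

```python
# Toy one-box comparison of a "flow gas" (CH4) and a "stock gas" (CO2) under
# constant emissions. Illustrative assumptions: one emission unit per year for
# each gas, first-order CH4 removal with a 12.4-year lifetime, no CO2 removal.
TAU_CH4 = 12.4  # atmospheric lifetime of CH4, years

def simulate(years=100, e_ch4=1.0, e_co2=1.0):
    burden_ch4, burden_co2, series = 0.0, 0.0, []
    for year in range(1, years + 1):
        burden_ch4 += e_ch4 - burden_ch4 / TAU_CH4  # emission minus removal
        burden_co2 += e_co2                         # emission simply accumulates
        series.append((year, burden_ch4, burden_co2))
    return series

for year, ch4, co2 in simulate()[9::20]:  # sample years 10, 30, 50, 70, 90
    print(f"year {year:3d}: CH4 burden = {ch4:5.1f}, CO2 burden = {co2:5.1f}")
```

In this sketch the CH4 burden levels off near emission times lifetime (about 12 units), while the CO2 burden keeps growing, which is why constant CH4 emissions add little additional warming whereas constant CO2 emissions warm at an accelerating rate.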
However, the near-term benefits of SLCP mitigation on human health, agriculture, ecosystems, and climate have been widely recognized (Allen 2015; Haines et al. 2017). Metrics for quantifying the climate impacts of methane Global Warming Potential (GWP) GWP transfers the climate contribution of different GHGs into a common scale and allows for their climate impacts to be compared. It quantifies the heat-trapping ability of different GHGs based on their RF. By definition (Myhre et al. 2013a), GWP is the time-integrated RF of one pulse emission of GHG related to that of CO2 over a chosen time horizon (Eq. 1). $${\text{GWP}}_{{\text{i}}} \left( {\text{H}} \right) = \frac{{{\text{AGWP}}_{{\text{i}}} \left( {\text{H}} \right)}}{{{\text{AGWP}}_{{{\text{CO}}_{2} }} \left( {\text{H}} \right)}} = \frac{{\mathop \smallint \nolimits_{0}^{{\text{H}}} {\text{RF}}_{{\text{i}}} \left( {\text{t}} \right){\text{dt}}}}{{\mathop \smallint \nolimits_{0}^{{\text{H}}} {\text{RF}}_{{{\text{CO}}_{2} }} \left( {\text{t}} \right){\text{dt}}}}$$ where H is the selected time horizon, year; \({RF}_{i}\) and \({RF}_{{CO}_{2}}\) are the global mean RF of the GHG i and CO2, respectively; \({\text{AGWP}}_{\text{i}}\left(\text{H}\right)\) is the Absolute Global Warming Potential for the GHG i. All GHGs can be converted into equivalent CO2 emission by multiplying the corresponding conversion factor, \({\text{GWP}}_{\text{i}}\left(\text{H}\right)\) (Eq. 2). $${\text{E}}_{{{\text{CO}}_{2} {\text{-eq}}}} = {\text{E}}_{{{\text{GHG}}_{i} }} \times {\text{GWP}}_{{\text{i}}} \left( {\text{H}} \right)$$ where \({\text{E}}_{{\text{CO}}_{2-}\text{eq}}\) is CO2-equivalent emission; \({\text{E}}_{{\text{GHG}}_{i}}\) is the emission rate of the GHG i; \({\text{GWP}}_{\text{i}}\left(\text{H}\right)\) is the conversion factor; H is the selected time horizon, year. Throughout the IPCC assessment reports AR1 to AR5, continuous updates on GWPs have been made to account for various interactions and processes. Commonly used GWP values for CH4 in the IPCC ARs (IPCC 1990, 1995, 2001, 2007, 2013) are listed in Table 2. GWP100, which is calculated based on a selected time horizon of 100 years, is the current universal GHG trading scheme. Table 2 GWPs for methane in IPCC assessment reports (ARs) Global Warming Potential Star (GWP*) An alternative method denoted as GWP* has been recently proposed to assess the climate effects of SLCPs as a supplementary to GWP (Allen et al. 2017, 2018; Cain 2018). GWP* was first proposed in the form of Eq. 3, which equates the temperature impact of a sustained one-ton-per-year increase in SLCP emission to that of a one-off pulse emission of GWPH × H tons of CO2 (Allen et al. 2018). The recently updated GWP* (Eq. 4) is comprised of a "flow" term (\(r\times \frac{\Delta {E}_{SLCP}}{\Delta t}\times H\)), which characterizes the fast climate response from the atmosphere–ocean surface interface to the changed RF caused by SLCP emission, and a "stock" term (\(s\times {E}_{SLCP}\)), which represents the slower climate response from the deep ocean (Cain et al. 2019). 
$${E_{C{O_2} - we}} = ~\frac{{\Delta {E_{SLCP}}}}{{\Delta t}} \times GW{P_i}\left( H \right) \times H$$ $${E_{C{O_2} - we}} = ~GW{P_i}\left( H \right) \times \left( {r \times \frac{{\Delta {E_{SLCP}}}}{{\Delta t}} \times H + s \times ~{E_{SLCP}}} \right)$$ where \({\text{E}}_{{\text{CO}}_{2}-\text{we}}\) is CO2-warming equivalent emission; \(\Delta {E}_{SLCP}\) is the change in SLCP emission rate over the time interval \(\Delta t\) (year); \({\text{E}}_{\text{SLCP}}\) is SLCP emission rate; r is the weighting assigned to the climate impacts of the change in SLCP emission rate; s is the weighting assigned to the climate impacts of the current emission (r + s = 1). Equation 3 can be considered a special case of Eq. 4 (r = 1 and s = 0) and can be applied to SLCPs that have only been released in recent years. The weights r and s are scenario-dependent, and the exact values are estimated by multiple linear regression onto the response to CH4 emissions during 1900–2100 in different scenarios (Cain et al. 2019). Lynch et al. (2020) found that a combination of r = 0.75 and s = 0.25 provide a good estimation of both historical and predicted warming impacts of CH4 with different scenarios. Allen et al. (2018) showed that scaling the change in SLCP emission \(\Delta {E}_{SLCP}\) over a \(\Delta t\) = 20 year provided a good fit for modelled warming. Considering the near to medium effects with all recommended parameters, Eq. 4 can be simplified to Eq. 5: $${E_{C{O_2} - we}} = ~GW{P_{100}} \times \left( {4 \times {E_{SLCP\left( t \right)}} - 3.75 \times {E_{SLCP\left( {t - 20} \right)}}} \right)$$ where \({\text{E}}_{\text{SLCP}\left(\text{t}\right)}\) and \({\text{E}}_{\text{SLCP}(\text{t}-20)}\) indicate a current and a 20 years ago SLCP emission, respectively. It is shown in Eq. 5 that GWP* weighs the climate effects caused by the current CH4 emission (\({\text{E}}_{\text{SLCP}(\text{t})}\)) four times as high as that estimated by GWP. In the meanwhile, it considers most of the CH4 emitted 20 years ago as having been removed (\({\text{E}}_{\text{SLCP}(\text{t}-20)}\)). Rather than being a brand-new metric, GWP* is a new way of applying GWP to SLCPs like CH4. GWP* does not convert the GHG emissions to an equivalent amount of CO2 (\({\text{E}}_{{\text{CO}}_{2}\text{-eq}}\)), which is always a positive number. Instead, it equates the climate impacts from a one-step permanent change of SLCP emission to that caused by a one-off "pulse" change of CO2 (\({\text{E}}_{{\text{CO}}_{2}-\text{we}}\), CO2-warming equivalent). Therefore, \({\text{E}}_{{\text{CO}}_{2}-\text{we}}\) can be either positive or negative to indicate the "warming" and "cooling" of the temperature compared with 20 years ago, related to an increase and decrease of CO2, respectively. Lynch et al. (2020) compared GWP100 and GWP* in different emission scenarios by using the FaIR v1.3 climate-carbon-cycle model. They demonstrated that GWP* provided a reliable link between CH4 emission and its warming impacts while GWP overestimated the climate impacts when the emissions were constant or decreasing. Methane from animal agriculture Methane from livestock production is primarily from enteric fermentation and manure management. Methane from enteric fermentation is a byproduct of digestion of feed materials, chiefly roughage. The majority of CH4 from ruminants is produced in the rumen and is exhaled or belched by the animal. 
During enteric fermentation in the rumen, methanogenic microorganisms generate CH4 from hydrogen (H2) and CO2 produced by protozoa, bacteria and anaerobic fungi (Martin et al. 2010; Morgavi et al. 2010; Tapio et al. 2017). The amount of CH4 emissions depends on animals (i.e. the type of digestive tract, production stage, age, and weight), feed (i.e. quality, quantity, and composition), and ambient temperature (Shibata and Terada 2010; IPCC 2019). The quantity and quality of feed affect the energy, nitrogen, and minerals available to the microorganisms in the rumen (Shibata and Terada 2010). The protein content in the feed negatively influences CH4 production, and the fiber content positively affects it (Shibata and Terada 2010). Only a small portion of CH4 is produced in the large intestines of ruminants and expelled via flatulence (EPA 1995). Methane from livestock manure is a product of anaerobic decomposition of the organic residues in the excreta of animals through a two microorganism mediated processes: "liquification", where organic substances are converted into organic acids with acetic and propionic acids being the primary products, and "methanogenesis", where organic acids are broken down into CH4 within a pH range of 6.5–8.0 (Lapp et al. 1975). The anaerobic condition largely determines the production of CH4 during manure storage and handling. Methane emission from manure management is largely dependent on ambient temperature and the composition and management practices of manure, including treatment, storage, and application methods (Petersen et al. 2013). Methane emissions vary significantly among different animal production systems. Liu et al. (2014) reviewed CH4 emissions data per animal unit (AU) in the literature and reported average emission rates from poultry layer houses (12–13 g d−1 AU−1), swine (24–16 g d−1 AU−1), beef steer (56–118 g d−1 AU−1), beef heifer (161–194 g d−1 AU−1), and dairy cows (281–323 g d−1 AU−1). According to the U.S. EPA GHG inventory (EPA 2020), beef and dairy cattle contribute 72% and 24.7% of the total CH4 from enteric fermentation (7.0 MMT), respectively; and beef and swine contribute 55.9% and 32.4% of the total CH4 from manure management (2.5 MMT), respectively. By using the life cycle approach, Gerber et al. (2013) reported that global animal agriculture contributes to approximately 7.1 × 103 MMT CO2-eq GHG emissions every year, and livestock CH4 accounts for about 44% of this total amount. Saunois et al. (2019) summarized the outputs from inverse modeling of satellite-based observational data and reported a decadal mean CH4 emission of 111 (106–116) MMT year−1 from global livestock production during 2008–2017. The Food and Agriculture Organization's (FAO) "bottom-up" inventory indicates that livestock contributed 103.5–109.9 MMT year−1 CH4 globally during the same period (FAO 1997). In the U.S., CH4 emissions from animal agriculture were 9.5 MMT in 2017 (EPA 2020). Livestock, and in particular cattle, have been broadly branded as major emitters of GHGs and significant drivers of climate change (Steinfeld et al. 2006; Hyner 2015; Abbasi et al. 2016). Dairy and beef cattle account for 65% of global livestock's CH4 emissions (Gerber et al. 2013). As a result, campaigns advocating for plant-based diets cite solving climate warming as one of the foremost reasons to forego meat (Orde 2016; McMahon 2019). But these opinions fail to distinguish the "flow gas" CH4 from the "stock gas" CO2 and the differences between biogenic and fossil fuel carbon. 
These arguments also overlook many other benefits of animal agriculture, including providing complete protein and utilizing non-arable land. By using the examples of the U.S. cattle industry, the present paper intends to initiate a rethinking of CH4 associated with animal agriculture, in respect to its comparatively short atmospheric lifetime, recycling in the biosystem, and the assessment of its climate impacts, with the objectives to clarify long-standing misunderstandings and uncover the potential role of animal agriculture in fighting climate change. Calculation of CH4 emission CH4 emission from U.S. cattle This paper looks at the CH4 warming impacts of U.S. cattle production (dairy and beef). The CH4 emission data for U.S. cattle production (both dairy and non-dairy) between 1961–2017 were downloaded from the FAOSTAT database (data source: FAO 1997). CH4 emission from California dairy cows The population data of dairy cows in California between 1951 and 2017 were obtained from the Milk Pooling Branch and Milk and Dairy Foods Safety Branches of California Department of Food and Agriculture (CDFA 2017). The 2000–2017 enteric CH4 annual emission factors of California dairy cows were from the Greenhouse Gas Inventory of California Air Resources Board (CARB 2019). Yearly CH4 emissions from enteric fermentation were estimated as the product of dairy cow population and the emission factor (Eq. 6). $$E_{enteric} = {\text{population}} \times {\text{annual emission factor}}$$ As the emission factors for the years before 2000 were not available from the CARB Greenhouse Gas Inventory, the emissions from the early years were estimated by using the emission factor of the year 2000. Total CH4 emissions from California dairies were the sum of CH4 from enteric fermentation and manure management. Methane emissions from manure management were estimated as below (Eq. 7). $$E_{manure} = {\text{Population}} \times \mathop \sum \limits_{i = 0}^{all} \left( {{\text{Emission factor for MMP}}_{i} \times {\text{Proportion of MMP}}_{i} } \right)$$ The Manure Management Practices (MMPi) include anaerobic digester, anaerobic lagoon, dairy spread, deep pit, liquid slurry, pasture, and solid storage. The CH4 emission factor for each MMPi and the yearly proportion of each MMPi in California manure management system, as listed in Table 3, were obtained from the CARB Greenhouse Gas Inventory (CARB 2019). Table 3 Emission factors used for calculating methane emission from California dairy cows Calculation of CO2-equivalent The CO2-equivalents of annual total CH4 emissions from U.S. cattle production were obtained by multiplying the GWP100 of CH4 (Eq. 2) $${\text{E}}_{{{\text{CO}}_{2} {\text{-eq }}}} = {\text{E}}_{{CH_{4} }} \times {\text{GWP}}_{{{\text{CH}}_{4} }} \left( {100} \right)$$ where \({\text{E}}_{{CH}_{4}}\) is the annual total CH4 emission and \({\text{GWP}}_{{\text{CH}}_{4}}\left(100\right)\) is 28. Calculation of CO2-warming equivalent The CO2-warming equivalents of annual total CH4 emissions from U.S. cattle production were obtained using the GWP* method (Eq. 5). $$E_{{CO_{2} {\text{-}} we}} = {\text{GWP}}_{{{\text{CH}}_{4} }} \left( {100} \right) \times \left( {4 \times E_{{CH_{4} \left( t \right)}} - 3.75 \times E_{{CH_{4} \left( {t - 20} \right)}} } \right)$$ where \({E}_{{CH}_{4}\left(t\right)}\) and \({E}_{{CH}_{4}\left(t-20\right)}\) indicate a current and a 20 years ago CH4 emission, respectively. As Eq. 
5 was derived by setting a \(\Delta t\) of 20 years, the first 20 years of CH4 emission data (1961–1980) were used as the reference (\({E}_{{CH}_{4}\left(t-20\right)}\) in Eq. 5) to obtain the \({E}_{{CO}_{2}-we}\) during 1981–2000.
Cattle production in the U.S.
From 1961 to 2017, the U.S. dairy cattle population decreased by 46% (FAO 1997). Over a similar period, the population of beef cattle peaked at 1.2 × 10⁸ head in 1975 before declining. In 2017, the beef cattle population was 8.4 × 10⁷ head, a reduction of 30% from 1975 (Fig. 2).
Fig. 2 U.S. non-dairy (i.e., beef) and dairy cattle population between 1961 and 2017. Hollow columns represent the non-dairy cow population; solid columns represent the dairy cow population; dashed lines represent total methane emission
Total CH4 emission from U.S. cattle production, including emissions from both enteric fermentation and manure management, was 7.4 MMT in 1961 and 6.2 MMT in 2017, with a peak emission of 8.5 MMT in 1975 (FAO 1997). The CH4 emission from U.S. cattle was thus 27% lower in 2017 than at the 1975 peak. As shown in Fig. 3, CH4 from U.S. cattle has contributed negative CO2-we to the climate every year since 1986, except for the period of 2008–2012. Between 1986 and 2017, the decrease in average annual CO2-we from U.S. cattle CH4 emissions is equivalent to the decreased warming from 50 MMT of atmospheric CO2 (Fig. 3, top), which is approximately 1% of the emission from nationwide fossil fuel combustion (EPA 2020). However, the GWP results suggested that the CH4 emissions from U.S. cattle production led to a "net carbon" (CO2-eq) gain of 165–196 MMT annually during the same period.
Fig. 3 Climate impacts of the methane from U.S. non-dairy (i.e., beef) and dairy cattle production. Solid line represents GWP results and dashed line represents GWP* results
For the cumulative climate impacts assessed by GWP, the metric aggregated all past impacts throughout 1981–2017 without acknowledging the decreases in warming during those years. It showed only positive warming from a shrinking herd, even though fewer cattle produced less CH4, and thus less warming. As a result, it did not accurately capture the warming caused by CH4 from the U.S. cattle herd during that period (Fig. 3, bottom). Conversely, GWP* fluctuated between 55 and 200 MMT CO2-we in the 1980s and, since 1990, has become increasingly negative as it factors in the reduced emissions of the gradually shrinking herd.
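As a quick arithmetic check, the annual totals just quoted convert to CO2-eq via Eq. (8) in a few lines of Python. This is an added illustration rather than the authors' code; the 1997 emission used in the last line is a placeholder assumption, since that value is not quoted in the text.

```python
# Convert the quoted U.S. cattle CH4 totals (MMT CH4) into CO2-eq (Eq. 8) and
# show one CO2-we example (Eq. 9). GWP100 = 28 as used in the paper.
GWP100_CH4 = 28

def co2_eq(e_ch4):                         # Eq. (8)
    return GWP100_CH4 * e_ch4

def co2_we(e_now, e_20yr_ago):             # Eq. (9)
    return GWP100_CH4 * (4.0 * e_now - 3.75 * e_20yr_ago)

for year, e in [(1961, 7.4), (1975, 8.5), (2017, 6.2)]:
    print(year, "CO2-eq =", round(co2_eq(e), 1), "MMT")
# 2017 gives 173.6 MMT, consistent with the 165-196 MMT CO2-eq range reported above.

# The CO2-we for 2017 needs the 1997 emission, which is not quoted in the text;
# 7.0 MMT is a placeholder used only to illustrate the sign of the result.
print(2017, "CO2-we =", round(co2_we(6.2, 7.0), 1), "MMT")  # negative, i.e. less warming
```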
Dairy production in California
California leads the United States in agricultural production and is the largest producer of milk and dairy products (USDA 2020). To further investigate how the development of the U.S. livestock industry affects climate change, and how GWP versus GWP* provide different indications for mitigation priorities, we focused on California dairies, applying the two metrics to their CH4 emissions. From 1950 to 1970, the California dairy industry was contracting in farm numbers but not in animal numbers: the number of farms decreased by 75% (from 19,428 to 4473) while the total herd in the state remained stable, between 7.5 × 10⁵ and 8.5 × 10⁵ head (CDFA 2017). Between 1970 and 2008, the California dairy industry boosted its production and the total herd doubled from 8.5 × 10⁵ to 1.9 × 10⁶ head (Fig. 4). This consolidation continued, with the number of farms decreasing from 4473 in 1970 to 1852 in 2008. During this time, from 1970 to 2008, the warming impact of California dairies increased using both GWP and GWP*, but GWP* showed the warming increasing three times more quickly than the traditional method, GWP. Notably, the state implemented its first climate policy, the California Global Warming Solutions Act, in 2006 and set its goal for a sustainable development outlook.
Fig. 4 California dairy cow population and milk production between 1950 and 2017. Columns represent the dairy cow population in California and the dashed lines represent the milk production
Between 2008 and 2016, the number of California dairy cows decreased by about 1% annually, and GWP* and GWP characterize the climate contributions of California dairies drastically differently. GWP* calculations show warming peaking in 2008 and then rapidly decreasing to 50% of its peak value by 2017, while the GWP results hit a plateau in 2008 and held at elevated levels from then on (Fig. 5, top).
Fig. 5 Climate impacts of methane from California dairy production. Grey and black solid lines represent GWP and GWP* of the methane emissions from California dairy cows, respectively; grey and black dotted lines represent the GWP and GWP* results, respectively, when the herd is constant; grey and black dashed lines represent the GWP and GWP* results, respectively, when the herd decreases 1% every year; grey and black dash-dotted lines represent the GWP and GWP* results, respectively, when the methane emissions meet California's mitigation target
Because the GWP* results showed that the climate warming effects of animal agriculture could be significantly reduced by lowering emissions only slightly, we continued our study with a 20-year projection, starting from 2018, under three assumed scenarios. The first scenario assumes that the California dairy industry continues current production practices and CH4 emissions remain constant at the 2017 level. The second scenario assumes that CH4 emissions continue to decrease by 1% per year, approximating the effect of the 1% annual decrease in population between 2008 and 2016. The third scenario assumes that CH4 emissions meet California's mitigation target of reducing livestock CH4 emissions by 40% by 2030 (below the 2013 level). If CH4 emissions from the California dairy industry remain constant every year, GWP* suggests that annual warming will decrease to less than 20% of the 2008 peak level in ten years and stabilize at around 13% of the peak level in twenty years (Fig. 5, top). If CH4 emissions continue to decrease by 1% every year, GWP* calculations indicate that the dairy industry will no longer contribute additional warming after ten years of reduction. If CH4 emissions follow California's ambitious mitigation goal established in Senate Bill 1383, which calls for a 40% reduction of 2013 emissions by 2030, the CH4 from the dairy industry will be contributing negative GWP* after five years of reduction (Fig. 5, top). Assuming the reduction started in 2018, the annual reduction is equivalent to removing the warming caused by 9 and 25 MMT of CO2 every year during the 2020s and the 2030s, respectively. The projection indicates that this is an efficient short-term solution. In contrast, the GWP results continued to show a warming contribution from the dairy industry every year in all three projected scenarios (Fig. 5, bottom).
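The mechanics of the three scenarios can be sketched in a few lines of Python. Everything below is illustrative only and is not the authors' model: the flat 20-year emission history (in arbitrary units), the index taken as the 2013 level, and the linear reduction path to the Senate Bill 1383 target are placeholder assumptions used solely to show how Eq. (5) responds to each trajectory.

```python
# Illustrative 20-year projection of the three scenarios, scored with GWP* (Eq. 5).
# All inputs are placeholders in arbitrary units, not the paper's inventory data.
GWP100_CH4 = 28

def co2_we(e_now, e_20yr_ago):
    return GWP100_CH4 * (4.0 * e_now - 3.75 * e_20yr_ago)

def project(history, years=20, scenario="constant"):
    """Extend an emission history and return (year offset, emission, CO2-we) tuples."""
    e = list(history)                        # history is assumed to cover 1998-2017
    target = 0.6 * e[-5]                     # SB 1383: 40% below the (assumed) 2013 level
    step = (e[-1] - target) / 13             # linear path from 2017 to 2030
    out = []
    for k in range(1, years + 1):
        if scenario == "constant":
            nxt = e[-1]
        elif scenario == "minus1pct":
            nxt = e[-1] * 0.99
        elif scenario == "sb1383":
            nxt = max(target, e[-1] - step)
        else:
            raise ValueError(scenario)
        e.append(nxt)
        out.append((k, nxt, co2_we(nxt, e[-21])))
    return out

history = [1.0] * 20                         # flat placeholder history, 1998-2017
for s in ("constant", "minus1pct", "sb1383"):
    k, e_final, we_final = project(history, scenario=s)[-1]
    print(f"{s:9s} year +{k}: emission = {e_final:.3f}, CO2-we = {we_final:6.1f}")
```

With these placeholder inputs the constant-emission path still yields a small positive CO2-we, the 1% per year decline turns negative within roughly a decade, and the Senate Bill 1383 path turns negative fastest, mirroring the ordering described above.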
Recycled carbon in animal agriculture
To fully understand livestock's contributions to the climate, it must be understood that CH4 emissions from biogenic versus fossil sources do not correspond equally to warming. This is because biogenic CH4 is not new carbon in the atmosphere. It is a constituent of the natural biogenic carbon cycle, which has been an essential part of life since it began. In the natural biogenic carbon cycle, plants assimilate CO2 from the atmosphere during photosynthesis and store it as carbohydrates (e.g., cellulose or starch). Ruminant animals consume the plants and convert some of the carbon contained in plant carbohydrates into CH4, which is then exhaled or belched out into the atmosphere. The CH4 remains in the atmosphere for about 12 years before it is converted back into CO2 through hydroxyl oxidation (Levy 1971; Badr et al. 1992). Therefore, biogenic carbon is "recycled carbon" and not new and additional to our atmosphere, though the warming effects during its atmospheric presence should still be recognized. Biogenic carbon is markedly different from fossil fuel carbon, which was stored underground for millions of years before being added to the atmosphere. The combustion of fossil fuel frees this carbon at a speed much faster than it can be removed, resulting in "net additional carbon" added to the atmosphere. The carbon in biogenic and fossil CH4 therefore differs in origin and atmospheric behavior: biogenic carbon keeps cycling between the biosystem and the atmosphere, while the carbon from fossil fuel is a "net" addition to the atmosphere. In the case of a stable herd with decreasing CH4 emissions, the amount of cattle-emitted carbon in the atmosphere is reduced. Yet the biogenic carbon cycle will still require carbon, as the herd's feed demand remains unchanged. Atmospheric carbon, from biogenic or fossil sources, will be incorporated into the cycle, eating into the abundance of CO2 that has accumulated in the atmosphere. If the CH4 emission from a herd decreases due to improved technologies and farm management, the biogenic carbon cycle can continuously absorb airborne net carbon, serving as a temporary "sink" that reduces the current atmospheric carbon burden and provides a short-term solution to climate warming.
Quantification of methane's climate effects
Limitations of applying GWP to biogenic methane
Despite the wide international application of GWP100 as a quantitative basis for GHG trading, there is no shortage of discussion of its limitations, especially its applicability to SLCPs like CH4 (Harvey 1993; Manne and Richels 2001; Alvareza et al. 2012). First, although GWP does account for the different lifetimes of GHGs, the physical interpretation of an SLCP's GWP becomes increasingly ambiguous as the selected time horizon extends. When the integration in Eq. 1 proceeds over the 100-year horizon, the numerator quickly approaches a constant because the emitted CH4 is oxidized in about a decade, while the denominator keeps increasing. Therefore, the magnitudes of GWPs are strongly dependent on the selection of the target time horizon for the assessment (Manne and Richels 2001). Second, GWP does not accurately reflect the actual climate impacts of CH4. Defined as the ratio of time integrals of RF, GWP can only be positive and always indicates a "warming" effect on the climate. It cannot reflect the potential "cooling" caused by a decline in CH4 emissions (Fig. 1, right). Using GWP as the quantification tool therefore overestimates the climate impacts caused by constant or decreasing emissions of SLCPs and could overlook opportunities for climate mitigation.
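To make this limitation concrete, the short Python sketch below applies both metrics to a CH4 source that declines steadily after 20 flat years. It is an added illustration of the argument rather than the authors' calculation; the 1.5% annual decline and the flat history are arbitrary choices.

```python
# Compare GWP and GWP* bookkeeping for a CH4 source that declines 1.5% per year
# after 20 flat years. All numbers are arbitrary illustrative units.
GWP100_CH4 = 28

def co2_eq(e_now):                        # Eq. (2) / Eq. (8)
    return GWP100_CH4 * e_now

def co2_we(e_now, e_20yr_ago):            # Eq. (5) / Eq. (9)
    return GWP100_CH4 * (4.0 * e_now - 3.75 * e_20yr_ago)

emissions = [1.0] * 20 + [0.985 ** k for k in range(1, 31)]   # flat, then -1.5%/yr

for i in (25, 35, 49):
    print(f"decline year {i - 19:2d}: CO2-eq = {co2_eq(emissions[i]):5.1f}, "
          f"CO2-we = {co2_we(emissions[i], emissions[i - 20]):6.1f}")

# Break-even: Eq. (5) is zero when current emissions are 3.75/4 = 93.75% of the
# level 20 years earlier, i.e. a decline of about 0.3% per year (cf. Cain et al. 2019).
print("break-even annual decline:", round(1 - (3.75 / 4) ** (1 / 20), 4))
```

Conventional CO2-eq accounting still reports a large positive number every year, whereas the CO2-warming-equivalent turns negative as the falling emissions translate into a falling contribution to warming; the break-even point corresponds to a decline of roughly 0.3% per year, the figure from Cain et al. (2019) cited earlier.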
GWP* for evaluating the climate effects of SLCPs GWP* is designed to characterize the short-lived nature of SLCPs. Cain et al. (2019) and Lynch et al. (2020) explained the application of GWP* under different emission scenarios and demonstrated that GWP* accurately captures the "cooling" of the temperature compared with 20 years earlier when the sinks of CH4 outweigh the emissions, while GWP still indicated "warming" effects under the same scenario. Also, GWP* can be directly linked to temperature change by using a transient climate response to cumulative carbon emissions (TCRE) coefficient, although this method tends to underestimate the temperature response and is less accurate than more comprehensive methods (Lynch et al. 2020). However, the practical application of GWP* will inevitably encounter challenges. Developers still need to provide different SLCP-specific parameter sets for GWP*, which may complicate its application. More case studies are necessary to further understanding of the new GWP*, as well as of the conceptual differences between \({\text{E}}_{{\text{CO}}_{2}\text{-eq}}\) and \({\text{E}}_{{\text{CO}}_{2}-\text{we}}\). Future investigations are needed regarding how to incorporate the climate information provided by GWP* into carbon footprint studies and GHG mitigation policies (Schleussner et al. 2019). The climate impacts of U.S. cattle production when considering GWP* According to the long-term projection of the International Farm Comparison Network (IFCN), worldwide milk demand will increase by 35% by the end of 2030 as both the global human population and dairy consumption per capita increase by 16% (Wyrzykowski et al. 2018). However, increasing demand for production does not necessarily result in a proportional increase in CH4 production. Methane emissions from U.S. cattle have been decreasing and have contributed negative CO2-we to the climate each year since 1986, except for the period 2008–2012, indicating a decrease in temperature rather than "warming". As a result of decreasing CH4 emissions, the biogenic carbon cycle consumed more carbon than it emitted and offset "net carbon" in the atmosphere, contributing to a "cooling" of the temperature compared to 20 years earlier. According to Cain et al. (2019), CH4 will not add additional warming to the climate over twenty years if its emission is reduced by 0.3% every year. The 20-year projections in our study indicate that the dairy industry in California can effectively help limit warming within ten years with an annual CH4 reduction of 1%, which is achievable by further utilizing production efficiencies and optimizing waste management. This is an example of a short-term solution to climate warming that the global community can leverage while developing long-term solutions. Approaches of the California dairy industry to climate neutrality The projections demonstrate that portions of animal agriculture are already part of a climate solution in some regions. With genetic optimization, better nutrition and animal care, and farm management improvements, fewer emissions are generated today while still meeting the increased demand for dairy products. For example, advancements in genetic evaluation and artificial insemination in the late 1960s increased the availability of high-yielding dairy cows for producers, which has raised the annual milk yield by 55% since 1980 (Shook 2006; Bauman and Capper 2010).
The introduction of the total mixed ration in the 1970s and of diet formulation programs enabled feeding of a nutritionally well-balanced diet that ensures the performance and productivity of dairy cows (Kolver and Muller 1998; Bauman and Capper 2010). From 1950 to 2016, the dairy industry in California tripled its milk production per cow from 3.3 × 10³ to 10.6 × 10³ kg, while the CH4 emitted per unit of milk production (kg CH4/kg milk) decreased from 0.102 to 0.035. Continued progress in farm management practices has significantly reduced GHGs and other gaseous pollutants from dairy farms (Boadi et al. 2004; Newbold and Rode 2006). For example, anaerobic digesters have gained growing popularity due to their capacity to reduce GHGs and recover energy. As of March 2020, there were a total of 127 anaerobic digesters on dairy farms throughout California, and 108 of them were funded through the Dairy Digester Research and Development Program (DDRDP) between 2015 and 2019 (CDFA 2020). In conjunction, a total of 106 dairies were funded to install alternative manure management practices (AMMPs), including separators, weeping walls, scrapers, and alternative manure treatment and storage, among others. According to CDFA, these measures will provide an annual reduction of 2.2 × 10⁶ tons of CO2-eq GHGs over the next five years, which equals 25% of the manure CH4 emissions in the state's 2013 inventory (CDFA 2019). California dairy farms are also taking additional steps to mitigate their total GHG emissions via various measures, such as the adoption of solar energy and electrified farm practices. According to the life cycle assessment of Naranjo et al. (2020), California dairies emitted 1.12 to 1.16 kg of CO2-eq GHG emissions to produce 1 kg of energy- and protein-corrected milk (ECM) in 2014, a reduction of 45–46.9% compared to the 1964 level. Methane is a short-lived climate pollutant, and it is fundamentally incorrect to assess the climate contribution of the "flow gas" CH4 in the same way as the "stock gas" CO2. The widely used metric GWP overestimates the CH4-induced "warming" and fails to reflect the relative "cooling" when the emission is decreasing. Therefore, applying GWP to biogenic CH4 from animal agriculture may result in misguided mitigation strategies and targets. GWP* should be used in combination with GWP to provide feasible strategies for fighting SLCP-induced climate change. The GWP* results in the present study showed that U.S. cattle production did not contribute additional climate warming between 1986 and 2017. They also suggest that California dairy farms are on the path to climate neutrality. By continuously improving production efficiency and management practices, animal agriculture can be a short-term solution to fight climate warming that the global community can leverage while developing long-term solutions for fossil fuel carbon emissions. The datasets used and/or analyzed during the current study are available from the corresponding author on request.
AGWP: Absolute global warming potential
AMMPs: Alternative manure management practices
ARs: IPCC assessment reports
AU: Animal unit
CARB: California Air Resources Board
CDFA: California Department of Food and Agriculture
DDRDP: Dairy Digester Research and Development Program
\({\text{E}}_{{\text{CO}}_{2}\text{-eq}}\): CO2-equivalent emission
\({\text{E}}_{{\text{CO}}_{2}-\text{we}}\): CO2-warming equivalent emission
ECM: Energy- and protein-corrected milk
EPA: United States Environmental Protection Agency
FAO: Food and Agriculture Organization of the United Nations
GHG: Greenhouse gas
GWP100: 100-Year global warming potential
GWP*: Global warming potential star
Gt: Gigaton (10⁹ tons)
IFCN: International Farm Comparison Network
IPCC: Intergovernmental Panel on Climate Change
LLCP: Long-lived climate pollutant
MMT: Million metric ton (10⁶ tons)
MMP: Manure management practices
RF: Radiative forcing
SLCP: Short-lived climate pollutant
TCRE: Transient climate response to cumulative carbon emissions
UNFCCC: United Nations Framework Convention on Climate Change
USDA: United States Department of Agriculture
WMO: World Meteorological Organization
Abbasi T, Abbasi T, Abbasi SA. Reducing the global environmental impact of livestock production: the minilivestock option. J Clean Prod. 2016. https://doi.org/10.1016/j.jclepro.2015.02.094. Allen MR. Short-lived promise? The science and policy of cumulative and short-lived climate pollutants. Oxford Martin Policy Paper; 2015. http://www.oxfordmartin.ox.ac.uk/downloads/briefings/Short_Lived_Promise.pdf. Accessed 4 Feb 2021. Allen MR, Cain M, Shine K. Climate metrics under ambitious mitigation. Oxford Martin Programme on Climate Pollutants; 2017. https://www.oxfordmartin.ox.ac.uk/downloads/academic/Climate_Metrics_%20Under_%20Ambitious%20_Mitigation.pdf. Accessed 8 Aug 2020. Allen MR, Shine KP, Fuglestvedt JS, Millar RJ, Cain M, Frame DJ, Macey AH. A solution to the misrepresentations of CO2-equivalent emissions of short-lived climate pollutants under ambitious mitigation. NPJ Clim Atmos Sci. 2018. https://doi.org/10.1038/s41612-018-0026-8. Alvarez RA, Pacala SW, Winebrake JJ, Chameides WL, Hamburg SP. Greater focus needed on methane leakage from natural gas infrastructure. Proc Natl Acad Sci U S A. 2012. https://doi.org/10.1073/pnas.1202407109. Badr O, Probert SD, O'Callaghan PW. Sinks for atmospheric methane. Appl Energy. 1992. https://doi.org/10.1016/0306-2619(92)90041-9. Bauman DE, Capper JL. Efficiency of dairy production and its carbon footprint. In: Proc Florida Ruminant Nutr Conf. Gainesville, Florida. 2010; p. 114–26. Boadi D, Benchaar C, Chiquette J, Massé D. Mitigation strategies to reduce enteric methane emissions from dairy cows: Update review. Can J Anim Sci. 2004. https://doi.org/10.4141/A03-109. Cain M. Guest post: A new way to assess 'global warming potential' of short-lived pollutants. Carbon Brief Ltd; 2018. https://www.carbonbrief.org/guest-post-a-new-way-to-assess-global-warming-potential-of-short-lived-pollutants. Accessed 8 Aug 2020. Cain M, Lynch J, Allen MR, Fuglestvedt JS, Frame DJ, Macey AH. Improved calculation of warming-equivalent emissions for short-lived climate pollutants. NPJ Clim Atmos Sci. 2019. https://doi.org/10.1038/s41612-019-0086-4. CARB. Documentation of California's Greenhouse Gas Inventory. Sacramento: CARB; 2019. https://ww2.arb.ca.gov/applications/california-ghg-inventory-documentation. Accessed 13 Jan 2021. CDFA Dairy Marketing, Milk Pooling, and Milk and Dairy Foods Safety Branches. Message to Jennifer Bingham (E-mail); 2017. CDFA. California Department of Food and Agriculture awards nearly $102 million for dairy methane reduction projects. Sacramento: CDFA; 2019. https://www.cdfa.ca.gov/egov/press_releases/Press_Release.asp?PRnum=19-085. Accessed 8 Aug 2020. CDFA.
Report of Funded Projects (2015–2019). CDFA Dairy Digester Research and Development Program (DDRDP). Sacramento: CDFA; 2020. https://www.cdfa.ca.gov/oefi/ddrdp/docs/DDRDP_Report_April2020.pdf. Accessed 8 Aug 2020. Demertzis K, Iliadis L. The impact of climate change on biodiversity: the ecological consequences of invasive species in Greece. In: Leal Filho W, Manolas E, Azul AM, Azeiteiro UM, McGhie H, editors. Handbook of climate change communication, vol. 1. Berlin: Springer; 2018. https://doi.org/10.1007/978-3-319-69838-0_2. EPA. Greenhouse gas biogenic sources. In: Fifth edition compilation of air pollutant emissions factors, vol. 1. Raleigh: EPA; 1995. https://www3.epa.gov/ttn/chief/ap42/ch14/index.html. Accessed 8 Aug 2020. EPA. Inventory of U.S. greenhouse gas emissions and sinks 1990–2018. Washington D.C: EPA; 2020. https://www.epa.gov/ghgemissions/inventory-us-greenhouse-gas-emissions-and-sinks-1990-2018. Accessed 8 Aug 2020. Etminan M, Myhre G, Highwood EJ, Shine KP. Radiative forcing of carbon dioxide, methane, and nitrous oxide: a significant revision of the methane radiative forcing. Geophys Res Lett. 2016. https://doi.org/10.1002/2016GL071930. FAO. FAOSTAT database. Rome: FAO; 1997. http://www.fao.org/faostat/. Accessed 8 Aug 2020. Gerber PJ, Steinfeld H, Henderson B, Mottet A, Opio C, Dijkman J, et al. Tackling climate change through livestock—a glibal assessment of emission and mitigation opportunities. Rome: FAO; 2013. http://www.fao.org/3/a-i3437e.pdf. Accessed 8 Aug 2020. Haines A, Amann M, Borgford-Parnell N, Leonard S, Kuylenstierna J, Shindell D. Short-lived climate pollutant mitigation and the sustainable development goals. Nat Clim Chang. 2017. https://doi.org/10.1038/s41558-017-0012-x. Harvey LDD. A guide to global warming potentials (GWPs). Energy Policy. 1993;12:21–5. He J, Naik V, Horowitz LW, Dlugokencky ED, Thoning K. Investigation of the global methane budget over 1980–2017 using GFDL-AM41. 2019. Atmos Chem Phys. https://doi.org/10.5194/acp-20-805-2020. Hyner H. A leading cause of everything: One industry that is destroying our planet and our ability to thrive on it. HELR Harvard Environ Law Rev; 2015. https://harvardelr.com/2015/10/26/elrs-a-leading-cause-of-everything-one-industry-that-is-destroying-our-planet-and-our-ability-to-thrive-on-it/. Accessed 8 Aug 2020. IPCC. Climate change: the IPCC scientific assessment. In: Houghton JT, Jenkins GJ, Ephraums JJ, editors. Report prepared for IPCC by working group 1. Cambridge: Cambridge University Press; 1990. p. 365. IPCC. Climate change 1995: The scientific of climate change. In: Houghton JT, Meira Filho LG, Callander BA, Harris N, Kattenberg A, Maskell K, editors. Contribution of working group I to the second assessment report of the intergovernmental panel on climate change. Cambridge: Cambridge University Press; 1995. p. 881. IPCC. Climate change 2001: the scientific basis. In: Houghton JT, Ding Y, Griggs DJ, Noguer M, van der Linden PJ, Dai X, Maskell K, Johnson CA, editors. Contribution of working group I to the third assessment report of the intergovernmental panel on climate change. Cambridge: Cambridge University Press; 2001. p. 881. IPCC. Climate change 2007: the physical science basis. In: Solomon S, Qin D, Manning M, Chen Z, Marquis M, Averyt KB, Tignor M, Miller HL, editors. Contribution of working group I to the fourth assessment report of the intergovernmental panel on climate change. Cambridge: Cambridge University Press; 2007. p. 996. IPCC. Climate change 2013: the physical science basis. 
In: Stocker TF, Qin D, Plattner GK, Tignor M, Allen SK, Boschung J, Nauels A, Xia Y, Bex V, Midgley PM, editors. Contribution of working group I to the fifth assessment report of the intergovernmental panel on climate change. Cambridge: Cambridge University Press; 2013. p. 1535. IPCC. 2019 Refinement to the 2006 IPCC Guidelines for National Greenhouse Gas Inventories. Switzerland: IPCC; 2019. https://www.ipcc.ch/report/2019-refinement-to-the-2006-ipcc-guidelines-for-national-greenhouse-gas-inventories/. Accessed 8 Aug 2020. Joos F, Roth R, Fuglestvedt JS, Peters GP, Enting IG, Von Bloh W, et al. Carbon dioxide and climate impulse response functions for the computation of greenhouse gas metrics: A multi-model analysis. 2013. Atmos Chem Phys. https://doi.org/10.5194/acp-13-2793-2013. Kirschke S, Bousquet P, Ciais P, Saunois M, Canadell JG, Dlugokencky EJ, et al. Three decades of global methane sources and sinks. Nat Geosci. 2013. https://doi.org/10.1038/ngeo1955. Kolver ES, Mulle LD. Performance and nutrient intake of high producing holstein cows consuming pasture or a total mixed ration. J Dairy Sci. 1998. https://doi.org/10.3168/jds.S0022-0302(98)75704-2. Lapp HM, Schulte DD, Sparling AB, Buchanan LC. Methane production from animal wastes. I. fundamental considerations. Can Agric Eng. 1975;17:97–102. Levy H. Normal atmosphere: large radical and formaldehyde concentrations predicted. Science. 1971. https://doi.org/10.1126/science.173.3992.141. Liu Z, Powers W. Greenhouse gases emissions from multi-species animal operations and potential diet effects. Trans ASABE. 2014. https://doi.org/10.13031/trans.57.10246. Lynch J, Cain M, Pierrehumbert R, Allen M. Demonstrating GWP*: a means of reporting warming-equivalent emissions that captures the contrasting impacts of short- and long-lived climate pollutants. Environ Res Lett. 2020. https://doi.org/10.1088/1748-9326/ab6d7e. Manne AS, Richels RG. An alternative approach to establishing trade-offs among greenhouse gases. Nature. 2001. https://doi.org/10.1038/35070541. Martin C, Morgavi DP, Doreau M. Methane mitigation in ruminants: from microbe to the farm scale. Animal. 2010. https://doi.org/10.1017/S1751731109990620. McMahon J. Meat and agriculture are worse for the climate than power generation, Steven Chu says. Forbes; 2019. https://www.forbes.com/sites/jeffmcmahon/2019/04/04/meat-and-agriculture-are-worse-for-the-climate-than-dirty-energy-steven-chu-says/#1bd0abf911f9. Accessed 8 Aug 2020. Morgavi DP, Forano E, Martin C, Newbold CJ. Microbial ecosystem and methanogenesis in ruminants. Animal. 2010. https://doi.org/10.1017/S1751731110000546. Myhre G, Shindell D, Bréon FM, Collins W, Fuglestvedt J, Huang J, et al. Anthropogenic and natural radiative forcing. In: Stocker TF, Qin D, Plattner GK, Tignor M, Allen SK, Boschung J, Nauels A, Xia Y, Bex V, Midgley PM, editors. Climate change 2013: the physical science basis. contribution of working group I to the fifth assessment report of the intergovernmental panel on climate change. Geneva: IPCC; 2013a. http://www.climatechange2013.org/. Myhre G, Shindell D, Bréon FM, Collins W, Fuglestvedt J, Huang J, et al. Anthropogenic and natural radiative forcing supplementary material; 2013b. http://www.climatechange2013.org/. Naranjo A, Johnson A, Rossow H, Kebreab E. Greenhouse gas, water, and land footprint per unit of production of the California dairy industry over 50 years. J Dairy Sci. 2020. https://doi.org/10.3168/jds.2019-16576. NASA. NOAA analyses reveal 2019 second warmest year on record; 2020. 
https://www.nasa.gov/press-release/nasa-noaa-analyses-reveal-2019-second-warmest-year-on-record. Accessed 8 Aug 2020. Newbold CJ, Rode LM. Dietary additives to control methanogenesis in the rumen. Int Congr Ser. 2006. https://doi.org/10.1016/j.ics.2006.03.047. O'Gorman PA. Precipitation extremes under climate change. Curr Clim Change Rep. 2015. https://doi.org/10.1007/s40641-015-0009-3. Orde E. Elena Orde writes on the anniversary of the UN's report "Livestock's Long Shadow" about what, if anything, has changed since animal farming was identified as a major cause of environmental devastation. The Vegan Society; 2016. https://www.vegansociety.com/whats-new/blog/livestock's-long-shadow-–-ten-years. Accessed 8 Aug 2020. Petersen SO, Blanchard M, Chadwick D, Del Prado A, Edouard N, Mosquera J, Sommer SG. Manure management for greenhouse gas mitigation. Animal. 2013. https://doi.org/10.1017/S1751731113000736. Pierrehumbert RT. Short-lived climate pollution. Annu Rev Earth Planet Sci. 2014. https://doi.org/10.1146/annurev-earth-060313-054843. Plattner GK, Knutti R, Joos F, Stocker TF, von Bloh W, Brovkin V, et al. Long-term climate commitments projected with climate-carbon cycle models. J Clim. 2008. https://doi.org/10.1175/2007JCLI1905.1. Ramanathan V, Xu Y. The Copenhagen accord for limiting global warming: Criteria, constraints, and available avenues. Proc Natl Acad Sci USA. 2010. https://doi.org/10.1073/pnas.1002293107. Sahade R, Lagger C, Torre L, Momo F, Monien P, Schloss I, et al. Climate change and glacier retreat drive shifts in an Antarctic benthic ecosystem. Sci Adv. 2015. https://doi.org/10.1126/sciadv.1500050. Saunois M, Bousquet P, Poulter B, Peregon A, Ciais P, Canadell JG, et al. The global methane budget 2000–2012. Earth Syst Sci Data; 2016. https://doi.org/10.5194/essd-8-697-2016. Saunois M, Stavert AR, Poulter B, Bousquet P, Josep G, Jackson RB, et al. The global methane budget: 2000–2017. Earth Syst Sci Data. Papers in open discussion; 2019. https://doi.org/10.5194/essd-2019-128. Schleussner CF, Nauels A, Schaeffer M, Hare W, Rogelj J. Inconsistencies when applying novel metrics for emissions accounting to the Paris agreement. Environ Res Lett. 2019. https://doi.org/10.1088/1748-9326/ab56e7. Shibata M, Terada F. Factors affecting methane production and mitigation in ruminants. Anim Sci J. 2010. https://doi.org/10.1111/j.1740-0929.2009.00687.x. Shoemaker JK, Schrag DP, Molina MJ, Ramanathan V. What role for short-lived climate pollutants in mitigation policy? Science. 2013. https://doi.org/10.1126/science.1240162. Shook GE. Major advances in determining appropriate selection goals. J Dairy Sci. 2006. https://doi.org/10.3168/jds.S0022-0302(06)72202-0. Steinfeld H, Gerber P, Wassenaar T, Castel V, Rosales M, de Haan C. Livetsocks's long shadow. Rome: FAO; 2006. http://www.fao.org/3/a0701e/a0701e00.htm. Accessed 8 Aug 2020. Tapio I, Snelling TJ, Strozzi F, Wallace RJ. The ruminal microbiome associated with methane emissions from ruminant livestock. J Anim Sci Biotechnol. 2017. https://doi.org/10.1186/s40104-017-0141-0. UNFCCC. The Paris Agreement; 2016. https://unfccc.int/process-and-meetings/the-paris-agreement/the-paris-agreement. Accessed 8 Aug 2020. USDA. 2019 State Agriculture Overview: California; 2020. https://www.nass.usda.gov/Quick_Stats/Ag_Overview/stateOverview.php?state=CALIFORNIA. Accessed 8 Aug 2020. World Meteorological Organization. The global climate in 2015–2019. Geneva: WMO; 2020. 
https://public.wmo.int/en/media/press-release/global-climate-2015-2019-climate-change-accelerates. Accessed 8 Aug 2020. Wyrzykowski Ł, Reincke K, Hemme T. IFCN long-term dairy outlook. Kiel: IFCN; 2018. https://ifcndairy.org/wp-content/uploads/2018/06/IFCN-Dairy-Outlook-2030-Article-1.pdf. Accessed 8 Aug 2020. The funding was provided by the University of California, Agriculture Experiment Station. CLEAR Center, Department of Animal Science, University of California, Davis, CA, 95616, USA Shule Liu, Joe Proudman & Frank M. Mitloehner Shule Liu Joe Proudman Frank M. Mitloehner SL analyzed the data and wrote the manuscript. JP provided critical comments and assisted in revising the manuscript. FM guided this study and proposed the core viewpoints of this manuscript. All authors read and approved the final manuscript. Correspondence to Frank M. Mitloehner. All authors have consented to this submission The funders listed above had no role in the composition or writing of the manuscript, or in the decision to publish. Liu, S., Proudman, J. & Mitloehner, F.M. Rethinking methane from animal agriculture. CABI Agric Biosci 2, 22 (2021). https://doi.org/10.1186/s43170-021-00041-y Accepted: 14 May 2021 GWP*
Endophytic fungi protect tomato and nightshade plants against Tuta absoluta (Lepidoptera: Gelechiidae) through a hidden friendship and cryptic battle Ayaovi Agbessenou1,2, Komivi S. Akutse1, Abdullahi A. Yusuf2,3, Sunday Ekesi1, Sevgan Subramanian1 & Fathiya M. Khamis1 Scientific Reports volume 10, Article number: 22195 (2020) Cite this article Endophytic fungi live within plant tissues without causing any harm to the host, promote its growth, and induce systemic resistance against pests and diseases. To mitigate the challenge posed by the concealed feeding behavior of immature stages of Tuta absoluta in both tomato (Solanum lycopersicum) and nightshade (Solanum scabrum) host plants, 15 fungal isolates were assessed for their endophytic and insecticidal properties. Twelve isolates were endophytic to both host plants, with varied colonization rates. Host plants endophytically colonized by Trichoderma asperellum M2RT4, Beauveria bassiana ICIPE 706 and Hypocrea lixii F3ST1 outperformed all the other isolates in significantly reducing the number of eggs laid, mines developed, pupae formed and adults emerged. Furthermore, the survival of exposed adults and of the F1 progeny was significantly reduced by Trichoderma sp. F2L41 and B. bassiana isolates ICIPE 35(4) and ICIPE 35(15) compared to other isolates. The results indicate that T. asperellum M2RT4, B. bassiana ICIPE 706 and H. lixii F3ST1 have high potential to be developed as endophytic-fungal-based biopesticides for the management of T. absoluta. Vegetable production is one of the most viable horticultural sub-sectors in Africa and is considered an important route out of poverty for smallholder farmers1. Tomato (Solanum lycopersicum L.; Solanaceae) is one of the most promising vegetables for horticultural expansion in Africa, but the crop is experiencing significant losses due to abiotic and biotic stressors, threatening the livelihoods of millions of smallholder farmers2. Among the biotic factors, the invasive tomato leafminer Tuta absoluta (Meyrick) (Lepidoptera: Gelechiidae), which originated in South America and has spread as far as Europe3, has emerged as one of the most devastating pests of tomato during the last decade, contributing to an increasing risk of malnutrition and food insecurity in Africa. In addition to tomato, the pest also attacks various cultivated and wild plants within the Solanaceae family such as pepper, Capsicum annuum L.; eggplant, Solanum melongena L.; tobacco, Nicotiana tabacum L.; potato, Solanum tuberosum L.; and black nightshade, Solanum nigrum L.4. In Kenya, tomato and nightshade are the most preferred hosts of the tomato leafminer, with high infestation causing up to 100% yield losses on tomato4. Estimates of the economic losses due to this pest reach as high as US$ 59.3 million annually5. Ovipositing females lay eggs on the upper surface of tomato leaves, which hatch after four to five days. Neonate larvae penetrate the leaf and feed on the mesophyll, producing thin, irregular mines on the leaf surface that compromise the photosynthetic activity of the plant and negatively affect crop productivity or yield6. Mature larvae bore into tomato leaves, fruits and flowers, spending most of their lifespan inside the crop rather than outside7. This concealed feeding behavior allows the pest to escape most of the synthetic insecticides currently applied, hindering management of the pest.
The resultant high use of synthetic pesticides causes significant short- and long-term adverse environmental and human health effects and drives increased resistance development in T. absoluta8, emphasizing the need to promote environmentally friendly control methods to curtail these problems. As a viable alternative to synthetic insecticides, the development of biological control approaches using entomopathogenic fungi has shown promising results, as these fungi cause high mortality in insect pests of economic importance9,10,11,12. Akutse et al.13 reported the potential of fungal pathogens to control the pest and subsequently identified three Metarhizium anisopliae (Metschnikoff) Sorokin strains (ICIPE 18, ICIPE 20 and ICIPE 655) as candidate biopesticides, causing mortality of 95.0, 87.5 and 86.25%, respectively, against the adult stage of the pest. Entomopathogenic fungi have traditionally been used to control insect pests mostly through inundative application14, but recent studies have begun to examine their activity as plant endophytes that systemically protect plants against herbivorous insect pests15; they are therefore well suited to target the cryptic stages of T. absoluta such as larvae and pupae16. Endophytic fungi are symptomless microbial organisms that live within host plant tissues, either naturally or through artificial inoculation, without causing any outward harm to the host17. Some of the advantages of using endophytic fungi compared to other biocontrol agents are that they are less exposed to environmental stresses and require little inoculum for systemic delivery within host plant tissues18. In some cases, these ubiquitous fungi play an important role as plant growth promoters, contributing to nutrient acquisition by the plant19. Although several inoculation methods have been reported to be effective in delivering the inoculum to the target site, seed treatment has been described as the most convenient, safe and cost-effective inoculation method for successful endophytic colonization of many crop plants20. Consequently, using this delivery technique, host-adapted endophytes have been successfully established in tomato21, Faba bean22,23, maize24 and cotton21. Upon plant colonization, endophytic fungi help their host plants to perform better under stressful environmental conditions (drought) and to withstand biotic stressors (pathogens and herbivores) through the induction of local or systemic resistance, antibiosis, phytohormone production and the stimulation of plant secondary metabolites25,26. In an attempt to improve the management of the pea leafminer Liriomyza huidobrensis (Blanchard) (Diptera: Agromyzidae), Akutse et al.23 and Gathage et al.27 reported that, through seed inoculation, endophytic fungi could successfully colonize Faba bean plant tissues and cause significant suppression of the pest. A similar study by Muvea et al.28 reported the establishment of endophytic fungi within onion plants and the ability of these microorganisms to reduce populations of the onion thrips, Thrips tabaci Lindeman (Thysanoptera: Thripidae), on inoculated plants. Similarly, tomato seeds pre-treated with the endophytic fungus Beauveria bassiana (Balsamo-Criv.) Vuillemin reduced larval performance of Helicoverpa zea (Boddie) (Lepidoptera: Noctuidae)29. Recently, Klieber and Reineke30 revealed that endophytic fungi inoculated in tomato plants mediated systemic resistance against T.
absoluta and played a significant role in reducing the feeding activity of the immature stages of the pest. This pest control strategy has added a new dimension to the use of fungal entomopathogens against cryptic insect pests whose life cycle limits the effectiveness of chemical insecticides and other control methods18,31. Therefore, to tackle the concealed feeding behavior of the larval stage of T. absoluta, the objective of this research was to assess the endophytic properties of fifteen fungal isolates in both tomato and nightshade plants and to evaluate their pathogenicity and ability to induce systemic resistance against the pest, with the aim of using the most potent isolates to develop an endophytic-fungal-based biopesticide as a component of integrated T. absoluta management (Tuta-IPM). Endophytic colonization of tomato and nightshade by fungal isolates The results of viability tests showed that conidial germination of the different fungal isolates used in this study exceeded 90% after 18 h of incubation. Endophytic colonization was determined by the recovery of the inoculated fungal strains from the roots, stems and leaves. The 15 fungal isolates differed markedly in their ability to colonize both tomato and nightshade plants. Irrespective of the host plant, M. anisopliae isolates ICIPE 30, ICIPE 69 and ICIPE 7 failed to colonize the various plant parts, while the remaining 12 isolates were successfully recovered from tomato and nightshade host plant parts (Fig. 1A,B). However, colonization of the different plant tissues (roots, stems and leaves) varied depending on the fungal isolate and host plant. For example, isolates of F. proliferatum F2S51, Trichoderma sp. F2L41, T. atroviride F2S21, H. lixii F3ST1, B. bassiana ICIPE 35(4), ICIPE 273, ICIPE 706 and T. asperellum M2RT4 colonized roots, stems and leaves of both host plants. Beauveria bassiana ICIPE 35(15) colonized the roots, stems and leaves of tomato plants, while it colonized only the roots and stems of nightshade (Fig. 1A,B). It is worth noting that B. bassiana ICIPE 35(12), ICIPE 35(6) and ICIPE 279 colonized only roots and stems of both host plants. In addition, H. lixii F3ST1 and T. asperellum M2RT4 colonized more than 85% of all the plant tissues of both host plants, while B. bassiana ICIPE 706 colonized 60, 40 and 15% of roots, stems and leaves of tomato, respectively (Fig. 1A), and 70, 35 and 15% of roots, stems and leaves of nightshade, respectively (Fig. 1B). Trichoderma atroviride F2S21 successfully colonized 100, 100 and 75% of roots, stems and leaves of tomato plants, respectively, and 100, 95 and 55% of roots, stems and leaves of nightshade, respectively (Fig. 1A,B). Significant differences in colonization by isolates were observed in roots (χ2 = 112.31, df = 11, P < 0.0001), stems (χ2 = 204.36, df = 11, P < 0.0001) and leaves (χ2 = 279.74, df = 11, P < 0.0001) of tomato (Fig. 1A). Similarly, significant differences were observed in colonization levels of plant parts of nightshade: roots (χ2 = 114.17, df = 11, P < 0.0001), stems (χ2 = 131.89, df = 11, P < 0.0001) and leaves (χ2 = 297.73, df = 11, P < 0.0001) (Fig. 1B). Endophytic colonization of tomato Solanum lycopersicum (A) and nightshade Solanum scabrum (B) host plant parts by 15 fungal isolates at 4–5 weeks post-inoculation. Bar charts represent means ± SE (standard error) at 95% CI (P < 0.05; n = 4). Effect of endophytically-colonized tomato and nightshade host plants on survival of adult Tuta absoluta The survival of T.
absoluta adults exposed to endophytically-colonized tomato plants varied significantly among the treatments (Proximate log rank test, χ2 = 168.5, df = 9, P < 0.0001). For example, at day 5 post-exposure, mean adult survival was 28.21% with B. bassiana ICIPE 273 and 32.69% with F. proliferatum F2S51 compared to 52.28% in the control (Fig. 2A). At day 10 post-exposure, mean adult survival ranged between 9.2 and 26.40% including the control, except for B. bassiana ICIPE 706 (30.80%). At day 15 post-exposure, the survival was less than 10% including the control. At day 20 post-exposure, no survival was observed in all the treatments including the control (Fig. 2A). Similarly, there was a significant difference in the survival of T. absoluta adults exposed to endophytically-colonized nightshade plants (Proximate log rank test, χ2 = 82.79, df = 9, P < 0.0001) compared to the control. At day 5 post-exposure, mean adult survival was between 39 and 54% including the control (Fig. 2B). At day 10 post-exposure, mean adult survival ranged between 10 and 26.6% including the control. At day 15 post-exposure, no survival was observed in T. asperellum M2RT4 while mean adult survival was below 10% in all the treatments including the control (Fig. 2B). Effect of endophytically-colonized host plants by fungal isolates on survival of adult Tuta absoluta: (A) Kaplan–Meier survival curves of Tuta absoluta adults exposed to endophytically-colonized tomato plants, (B) Kaplan–Meier survival curves of Tuta absoluta adults exposed to endophytically-colonized nightshade plants (P < 0.05, n = 4). Effect of endophytically-colonized tomato and nightshade host plants on oviposition and leafmining of Tuta absoluta The number of eggs laid on endophytically-colonized tomato plants varied significantly among the treatments (χ2 = 208.92, df = 9, P < 0.0001) (Fig. 3A). For instance, T. asperellum M2RT4 endophytically-colonized tomato plants recorded the lowest number of eggs (30.0 ± 4.51 eggs), followed by B. bassiana ICIPE 706 (31.25 ± 5.88 eggs), H. lixii F3ST1 with 63.25 ± 2.66 eggs and T. atroviride F2S21 with 63.5 ± 7.63 eggs, compared to 111.0 ± 13.32 eggs in the control (Fig. 3A). However, the highest number of eggs was recorded on B. bassiana ICIPE 273 (228.75 ± 24.36 eggs), followed by F. proliferatum F2S51 (177.0 ± 15.96 eggs), Trichoderma sp. F2L41 with 142.0 ± 27.67 eggs and the control (111.0 ± 13.32 eggs) (Fig. 3A). Upon egg hatching, T. asperellum M2RT4-endophytically-colonized tomato plants recorded the lowest number of mines (24.0 ± 5.4 mines) while B. bassiana ICIPE 273 recorded the highest number of mines (219.0 ± 20.92 mines) followed by F. proliferatum F2S51 (173.5 ± 15 mines), Trichoderma sp. F2L41 with 137.0 ± 24.47 mines, compared to 107.33 ± 13.32 mines in the control (χ2 = 216.4, df = 9, P < 0.0001) (Fig. 3B). Effect of endophytically-colonized host plants by fungal isolates on oviposition and leafmining of Tuta absoluta at 48 h post-exposure. (A) Bar chart showing mean number (± SE) of Tuta absoluta eggs laid on endophytically-colonized tomato plants. (B) Bar chart showing mean number (± SE) of mines produced by Tuta absoluta on endophytically-colonized tomato plants. (C) Bar chart showing mean number (± SE) of Tuta absoluta eggs laid on endophytically-colonized nightshade plants. (D) Bar chart showing mean number (± SE) of mines produced by Tuta absoluta on endophytically-colonized nightshade plants. 
Means followed by different lowercase letters are significantly different (P < 0.05; n = 4; Tukey's HSD test). Similarly, endophytically-colonized nightshade plants had a significant effect on the oviposition of T. absoluta (χ2 = 91.73, df = 9, P < 0.0001) (Fig. 3C). Among the fungal isolates, the lowest number of eggs was laid on T. asperellum M2RT4 endophytically-colonized nightshade plants (33.25 ± 3.97 eggs) compared to 109.33 ± 23.31 eggs in the control (Fig. 3C). However, the highest number of eggs (162.25 ± 20.01 eggs) was recorded on B. bassiana ICIPE 273, followed by Trichoderma sp. F2L41 with 111.75 ± 21.85 eggs and B. bassiana ICIPE 706 (104.25 ± 11.38 eggs), compared to 109.33 ± 23.31 eggs in the control (Fig. 3C). Subsequently, following egg hatching, the lowest number of mines (24.5 ± 5.55 mines) was recorded on T. asperellum M2RT4 endophytically-colonized nightshade plants, while the highest was recorded on B. bassiana ICIPE 273 (155.0 ± 19.94 mines), followed by Trichoderma sp. F2L41 (108.5 ± 22.02 mines) and the control (107.33 ± 23.31 mines) (χ2 = 110.95, df = 9, P < 0.0001) (Fig. 3D). Effect of endophytically-colonized tomato and nightshade host plants on Tuta absoluta pupation and adult emergence The pupation of T. absoluta larvae that survived was significantly affected (χ2 = 131.45, df = 9, P < 0.0001) by the endophytically-colonized tomato plants (Fig. 4A). In endophytically-colonized tomato plants, fewer T. absoluta pupae were produced in B. bassiana ICIPE 706 (20.75 ± 4.05 pupae), followed by T. asperellum M2RT4 (21.25 ± 5.22 pupae), which were significantly different from F. proliferatum F2S51 (151.25 ± 23.92 pupae) and the control (103.67 ± 12.55 pupae) (Fig. 4A). Further, T. absoluta adult emergence varied significantly among the fungal isolates (χ2 = 58.34, df = 9, P < 0.01), where the highest number of moths (148.0 ± 24.57) emerged from F. proliferatum F2S51 endophytically-colonized tomato plants, followed by the control (101.67 ± 11.46 moths), while the lowest number (17.0 ± 6.34 moths) was recorded on T. asperellum M2RT4 endophytically-colonized tomato plants (Fig. 4B). Effect of endophytically-colonized host plants by fungal isolates on Tuta absoluta pupation and adult emergence. (A) Bar chart showing mean number (± SE) of Tuta absoluta pupae produced on endophytically-colonized tomato plants. (B) Bar chart showing mean number of Tuta absoluta adults emerging from endophytically-colonized tomato plants. (C) Bar chart showing mean number (± SE) of Tuta absoluta pupae produced on endophytically-colonized nightshade plants. (D) Bar chart showing mean number of Tuta absoluta adults emerging from endophytically-colonized nightshade plants. Means followed by different lowercase letters are significantly different (P < 0.05; n = 4; Tukey's HSD test). Pupal formation was significantly different among the treatments (χ2 = 90.95, df = 9, P < 0.0001), where the highest number was obtained in the control (102.33 ± 22.93 pupae) and the lowest (19 ± 4.12 pupae) was recorded in T. asperellum M2RT4 endophytically-colonized nightshade plants (Fig. 4C). Further, the number of adults that emerged from the control (99.33 ± 22.98 moths) was significantly higher than the lowest number (15.5 ± 3.2 moths) that was obtained in T. asperellum M2RT4 endophytically-colonized nightshade plants (χ2 = 44.99, df = 9, P < 0.0001) (Fig. 4D).
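The count comparisons above and the survival comparisons in this results section follow the statistical approach described in the Methods below (negative-binomial GLMs with Tukey's HSD via the agricolae package, and Kaplan–Meier, log-rank and Cox models via the survival package). A minimal R sketch of that type of workflow is shown below; the data frames, treatment labels and effect sizes are hypothetical stand-ins rather than the study's data, and the mean-separation step uses a plain aov fit because the exact model objects passed to HSD.test are not reported.

# Hypothetical egg counts for a few treatments (4 replicate cages each), mimicking the design.
set.seed(1)
counts <- data.frame(
  treatment = factor(rep(c("Control", "M2RT4", "ICIPE706", "F3ST1"), each = 4)),
  eggs      = rnbinom(16, mu = rep(c(110, 30, 31, 63), each = 4), size = 5)
)

library(MASS)                                   # glm.nb: negative-binomial GLM for overdispersed counts
m1 <- glm.nb(eggs ~ treatment, data = counts)
m0 <- glm.nb(eggs ~ 1, data = counts)
anova(m0, m1)                                   # likelihood-ratio chi-square test for the treatment effect

library(agricolae)                              # Tukey's HSD mean separation (here on a simple aov fit)
HSD.test(aov(eggs ~ treatment, data = counts), "treatment", console = TRUE)

# Hypothetical adult survival times (days) for two treatments, 40 moths each.
library(survival)
surv_d <- data.frame(
  day       = round(rexp(80, rate = rep(c(1/6, 1/9), each = 40))) + 1,
  status    = 1,                                # 1 = death observed (no censoring in this sketch)
  treatment = factor(rep(c("F2L41", "Control"), each = 40))
)
fit <- survfit(Surv(day, status) ~ treatment, data = surv_d)   # Kaplan-Meier curves
survdiff(Surv(day, status) ~ treatment, data = surv_d)         # log-rank test
coxph(Surv(day, status) ~ treatment, data = surv_d)            # Cox proportional hazards model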
Effect of endophytically-colonized tomato and nightshade host plants on Tuta absoluta F1 progenies survival The median survival time of F1 progenies from the endophytically-colonized tomato plants varied significantly among the treatments (Proximate log rank test, χ2 = 180.7, df = 9, P < 0.0001) (Fig. 5A). At day 5 post emergence, mean survival was between 15.6 and 24% in Trichoderma sp. F2L41, B. bassiana ICIPE 35(4), B. bassiana ICIPE 35(15) and F. proliferatum F2S51 compared to 58.16% in the control (Fig. 5A). At day 10 post emergence, there was no survival in Trichoderma sp. F2L41, B. bassiana ICIPE 35(4), B. bassiana ICIPE 35(15) while it was between 7 and 28% in other treatments including the control (Fig. 5A). Similarly, the survival of F1 progeny from endophytically-colonized nightshade plants revealed a significant difference among treatments (Proximate log rank test, χ2 = 128.9, df = 9, P < 0.0001) (Fig. 5B). At day 5 post emergence, mean survival was between 17 and 29% in Trichoderma sp. F2L41, B. bassiana ICIPE 35(4), B. bassiana ICIPE 35(15) and F. proliferatum F2S51 compared to 51.23% in the control (Fig. 5B). At day 10 post emergence, there was no survival in Trichoderma sp. F2L41, B. bassiana ICIPE 35(4), B. bassiana ICIPE 35(15) and F. proliferatum F2S51 while it ranged between 11 and 19% in other treatments including the control (Fig. 5B). Effect of endophytically-colonized host plants by fungal isolates on Tuta absoluta F1 progenies survival. (A) Kaplan–Meier survival curves of Tuta absoluta F1 progenies survival emerging from endophytically-colonized tomato plants. (B) Kaplan–Meier survival curves of Tuta absoluta F1 progenies survival emerging from endophytically-colonized nightshade plants (P < 0.05, n = 4). This study demonstrated successful endophytic colonization and establishment of some fungal isolates in tomato and nightshade host plants by negatively affecting T. absoluta through significant reduction of the pest oviposition capacity, leafmining, pupal formation, adult emergence and survival. Trichoderma asperellum M2RT4, B. bassiana ICIPE 706 and H. lixii F3ST1-endophytically-colonized host plants outperformed all the other endophytes in affecting all the life-history parameters of the pest and could therefore contribute to its suppression in tomato and other solanaceous crops. Among the 15 fungal isolates tested, 12 were endophytic to both host plants with varying colonization rates while M. anisopliae isolates failed to colonize both host plants. Irrespective of the host plant, fungal isolates belonging to the genera Fusarium (F. proliferatum F2S51), Trichoderma (T. asperellum M2RT4, T. atroviride F2S21 and Trichoderma sp. F2L41) and Hypocrea (H. lixii F3ST1) have demonstrated high colonization rates of all tomato and nightshade plant tissues. These fungal isolates except for Trichoderma sp. F2L41 had a similar in planta colonization pattern in onion through seed inoculation as previously reported by Muvea et al.28. Mutune et al.32 also reported the potential of T. asperellum M2RT4, T. atroviride F2S21 and H. lixii F3ST1 to endophytically-colonize different parts of the common bean plant Phaseolus vulgaris L. (Fabaceae). This implies that the recovery of endophytic fungi from plant tissues (leaves, stems and roots) after seed inoculation is an indication of their ascending movement within the plant33. Previously, such systemic spread of endophytes within the plant has been reported to occur in several crops such as maize34, Vicia faba and P. 
vulgaris23,35, tomatoes29, bananas36 and coffee37. Some endophytic fungi have also been reported to display a differential ability to colonize and multiply in the root cortex of different plant species while others establish in the whole plant tissues38. Recently, a survey conducted on the prevalence and distribution of fungal root endophytes occurring in tomato crop in Kenya by Bogner et al.39 found that the most prevalent endophytic fungi associated with tomato roots were members of Fusarium and Trichoderma genera. This confirms observations of Hardoim et al.40 who reported that members of these two genera have the potential to colonize a wide range of hosts, suggesting their great metabolic and physiological adaptability. In contrast to the successfully high colonization rates of plant tissues by Fusarium, Trichoderma and Hypocrea, the level of colonization of B. bassiana isolates varied according to the various plant tissues with low colonization rate found in the leaves. A probable explanation of the low recovery of B. bassiana from the aerial tissues could be due to the speed of colonization (inoculum migration) or the presence of physical barriers in the leaf which prevent the fungus from penetrating the epidermis which may contain some substances inimical to the growth of the fungus33,41. However, there are several research evidence that reported the ability of B. bassiana to colonize a wide range of plants belonging to the monocot and dicot groups including banana36, V. faba42, opium43, maize44, cassava45, tomato29 and coffee33. Nonetheless, this underscores the lack of host specificity expressed by this fungal species in both host seedlings46. In this study, the highest recovery of B. bassiana was from the roots of the plants which indicates that through seed inoculation this strain has gained access to the cells of the plant. This confirms the observation that many endophytic fungi originate from the rhizosphere microbiota, an environment which attracts microorganisms better due to the presence of root exudates and rhizodeposits47. On the other hand, Behie et al.15 reported that endophytic fungi may display preferential tissue colonization within their host plants owing to many factors, including plant tissue type, plant genotype, microbial taxon and strain type40. Even though M. anisopliae isolates ICIPE 7, ICIPE 30 and ICIPE 69 were reported to be pathogenic to several arthropod pests of economic importance48, their failure in colonizing both host plants indicates their inability to establish themselves in living plant tissues of tomato and nightshade. Similar results have been reported on other host plants such as French bean and Faba bean23,32. Additionally, perhaps not all insect-pathogenic fungi have the ability to establish themselves as endophytes in living plant tissues49. However, numerous studies have documented the ability of Metarhizium spp. to colonize plant roots providing multiple benefits to their host plants45,50,51. In general, our results reveal that exposure of both endophytically-colonized host plants to ovipositing T. absoluta females has resulted in a significant reduction in the number of eggs laid on the inoculated plant compared to the control. Among the most potent endophytic fungal isolates, we found that T. asperellum M2RT4, B. bassiana ICIPE 706 and H. lixii F3ST1 significantly reduced oviposition of the pest. For instance, Muvea et al.52 demonstrated a sixfold reduction in oviposition of onion thrips on plants endophytically-colonized by H. 
lixii F3ST1 compared to endophyte-free plants. Also, Akutse et al.23 reported that Faba bean endophytically-colonized by H. lixii had significant effect on the egg-laying capacity of the pea leafminer L. huidobrensis. It is worth noting that the female's choice to reduce egg production could be due to the absence of favorable conditions that would compromise the survival of the progeny53. Furthermore, T. asperellum M2RT4 negatively affected leafmining activity as well as pupation and adult emergence. When the hatching larvae feed on inoculated tissue, it generally results in a decreased fitness of the herbivore54. This corroborates with Akutse et al.23 who reported that endophytic fungi provide systemic protection against the pea leafminer L. huidobrensis and deterrent effects on life-history parameters of the pest. In addition, several studies have also reported insecticidal activities of endophytic fungi against insects feeding on endophytically-colonized plants through antibiosis or feeding deterrence, suggesting that immature larvae were probably affected through the secretion of toxic compounds in planta30,45,55,56,57. The inhibition of the larval performance due to the presence of Trichoderma spp. within the host plants has previously been reported58. The systemic activity of this fungal isolate as one of the most potent endophytic fungal strain controlling T. absoluta was not surprising, since similar effects have been reported in previous studies by Akello and Sikora22 and Muvea et al.28 on aphids and thrips population, respectively. The latter indicated that onion thrips feeding on onion plants inoculated by Trichoderma spp. performed worse and few immature stages reached the adult stage compared to the control. This suggests that T. asperellum M2RT4 possesses specific properties that trigger plant resistance which results in significant reduction of insect herbivory59. Similarly, Coppola et al.60 reported an enhancement of the indirect defense barriers against the aphid Macrosiphum euphorbiae (Hemiptera: Aphididae) feeding on tomato plants colonized by T. harzianum T22. On the other hand, we found that B. bassiana fungal isolates (ICIPE 706 and ICIPE 273) reduced leafmining activity as well as pupation although showing low level in planta colonization pattern. However, B. bassiana isolate ICIPE 706 had the highest negative impact on the pest oviposition, pupation and adult emergence in both host plants, while it reduced significantly the mines formation only in tomato. Since T. absoluta larvae continue to feed on inoculated plants after egg hatching due to their cryptic nature, the amount and quality of host diet could significantly affect the feeding behavior of the leafmining larvae. It is therefore possible that this low colonization level was sufficient for the plants to initiate a defense reaction61. Klieber and Reineke30 reported that T. absoluta larvae experienced detrimental effects when feeding on tomato leaves infected with B. bassiana. Lewis et al.62 also demonstrated that when B. bassiana remains in the maize plant as endophyte, it provides a season-long management of the European corn borer, Ostrinia nubilalis (Hübner) (Lepidoptera: Crambidae) through the reduction of the larval activity of the pest. Qayyum et al.63 reported that endophytic colonization of B. bassiana has potential as an effective strategy to control Helicoverpa armigera (Hübner) (Lepidoptera: Noctuidae) in tomatoes. 
Not all the fungal isolates tested in this study were able to deter oviposition by T. absoluta. Of the tested isolates, three (B. bassiana ICIPE 273, F. proliferatum F2S51 and Trichoderma sp. F2L41) recorded high numbers of eggs compared to the other treatments and the control (endophyte-free tomato and nightshade plants). Jensen et al.42 also found increased fecundity of the second generation of Aphis fabae on V. faba plants following seed and leaf inoculation with B. bassiana. The authors further speculated that B. bassiana improved the quality of the host plant, which led the insects to increase the number of eggs laid on the inoculated plants. Similarly, Jallow et al.64 examined the systemic effects of the endophytic fungus Acremonium strictum in tomato on the oviposition behavior of the polyphagous moth Helicoverpa armigera (Hübner). The authors reported that H. armigera moths oviposited more eggs on leaves of A. strictum-inoculated plants than on endophyte-free plants. Later, Jaber and Vidal65 suggested that the increased oviposition preference of H. armigera moths for inoculated plants might be an evolutionary adaptation to the host plant. Although we did not establish the mechanism by which these three isolates (B. bassiana ICIPE 273, F. proliferatum F2S51 and Trichoderma sp. F2L41) increased the attractiveness of the two host plants for egg-laying by T. absoluta, our results suggest that secondary metabolites or microbial volatile organic compounds produced by these endophytes, or the interaction of the plants with the fungi, may play a role in influencing host selection by T. absoluta for oviposition66. The difference in the number of eggs laid on the various inoculated plants is suggestive of chemical and/or molecular mechanism(s) mediating the interaction between the endophytes, the insect and its host plants, calling for further studies. The results reported here showed that females exposed to intact tomato and nightshade plants lived less than 20 days. This finding is in agreement with Silva et al.67, who reported that female T. absoluta had a lifespan of less than 20 days. However, our result contrasts with Pereyra and Sanchez68, who reported that the survival of T. absoluta individuals could extend to day 45 and remained high for most of the lifetime, only decreasing to 50% at day 25. These variations might be due to the experimental conditions or the food source provided to the emerged adults during the survival bioassays. Further, we found a rapid decline in the survival rates of T. absoluta F1 progenies that emerged from larvae that fed on host plants endophytically colonized by Trichoderma sp. F2L41, B. bassiana ICIPE 35(4) and B. bassiana ICIPE 35(15). Our results concur with Dash et al.69, who also found a reduction in the survival of adult spider mites whose larvae fed on endophytically-colonized bean plants. Akello et al.70 reported an antagonistic activity mediated by the endophytic fungus B. bassiana towards the banana weevil adult, Cosmopolites sordidus (Coleoptera: Curculionidae). However, we did not record any sign of fungal infection on the dead insects, which suggests that systemic resistance or feeding deterrence, rather than direct infection, was probably responsible for the adverse effect of the inoculated plants on adult survivorship.
Such deterrence exhibited by inoculated plants is related to the production of secondary metabolites by some fungi, which may be an interesting exploitable feature for their sustainable use against agricultural insect pests of economic importance71. In this study, we identified T. asperellum M2RT4, B. bassiana ICIPE 706 and H. lixii F3ST1 as the most potent endophytic fungal isolates mediating improvement of tomato and nightshade anti-herbivore defense against T. absoluta through the reduction of adult oviposition, leafmining, pupation and adult emergence compared to the other treatments. Trichoderma asperellum M2RT4, B. bassiana ICIPE 706 and H. lixii F3ST1 could therefore be considered the best candidates for the development of an endophytic-based biopesticide and could be integrated as a component of a sustainable integrated T. absoluta management strategy for tomato and nightshade production systems. However, further studies are warranted to clearly understand the underlying mechanisms by which the presence of endophytic fungi within tomato and nightshade host plants affects T. absoluta, as well as to validate the findings under field conditions. Fungal cultures Fifteen fungal isolates belonging to five different genera (Beauveria (7), Fusarium (1), Hypocrea (1), Metarhizium (3) and Trichoderma (3)), obtained from the International Centre of Insect Physiology and Ecology (icipe)'s Arthropod Pathology Unit Germplasm, were used in this study (Table 1). These isolates were cultured on potato dextrose agar (PDA) (OXOID CM0139, Oxoid Ltd., Basingstoke, UK), except for the Metarhizium isolates, which were cultured on Sabouraud dextrose agar (SDA) (OXOID CM0041, Oxoid Ltd., Basingstoke, UK), and maintained at 25 ± 2 °C in complete darkness. Conidia were harvested by scraping the surface of two- to three-week-old sporulated cultures using a sterile spatula. The harvested conidia were then suspended in 10 mL sterile distilled water containing 0.05% Triton X-100 (MERCK KGaA, Darmstadt, Germany) and vortexed for 5 min at about 700 rpm to break conidial clumps and ensure a homogenous suspension23,28. Conidial concentrations were quantified using an improved Neubauer hemocytometer under a light microscope72. The conidial suspension was adjusted to a concentration of 1 × 10⁸ conidia mL⁻¹ through serial dilution prior to inoculation of tomato and nightshade seeds. Table 1 List of fungal isolates used in this study. Prior to commencement of the bioassays, spore viability was determined by plating 0.1 mL of a 3 × 10⁶ conidia mL⁻¹ suspension evenly onto 9-cm Petri dishes containing SDA or PDA. Three sterile microscope cover slips (2 × 2 cm) were placed randomly on the surface of each inoculated plate. Plates were sealed with Parafilm, incubated in complete darkness at 25 ± 2 °C and examined after 16–20 h. The percentage germination of conidia was determined from 100 randomly selected conidia on the surface area covered by each cover slip under a light microscope (×400), using the method described by Goettel and Inglis72. Conidia were considered to have germinated when the length of the germ tube was at least twice the diameter of the conidium72. Four replicates were used for each isolate. Seed inoculation and colonization assessment of endophyte isolates Tomato (Solanum lycopersicum L. cv. "Moneymaker") and nightshade (Solanum scabrum Mill cv.
"Giant nightshade") seeds (Simlaw Seeds Company Ltd., Nairobi, Kenya) were surface-sterilized by washing them up successively in 70% ethanol for 2 min followed by 1.5% sodium hypochlorite for three (3) min and finally rinsed three times in sterile distilled water. The surface sterilized seeds were placed on sterile filter paper on a clean working surface in a cabinet until the residual water evaporated. Effectiveness of the surface sterilization technique was confirmed by plating out 0.1 mL of the last rinse water onto potato dextrose agar and also imprinting of surface sterilized seeds onto PDA (tissue imprint) supplemented with 100 mg/L Streptomycin and plates were incubated at 25 °C for 14 days73. Seeds were then soaked overnight for 12 h in conidial suspensions titrated at 1 × 108 conidia mL−1. For the controls, sterilized seeds were soaked overnight for 12 h in sterile distilled titrated (0.05% Triton X-100) water23,28. Seeds were then transferred into plastic pots (8 cm diameter × 7.5 cm high) containing the planting substrate with a volume of 0.5 L (mixture of manure and soil 1:5). The substrate was sterilized in an autoclave for 2 h at 121 °C and allowed to cool for 72 h prior to planting. Five seeds were sowed per pot and maintained at room temperature (25 ± 2 °C, 60% RH and 12:12 L:D photoperiod). Pots were transferred immediately after germination to the screen house (2.8 m length × 1.8 m width × 2.2 m height) at 25 ± 2 °C, 55% RH and 12:12 L:D photoperiod for 4–5 weeks. After germination, seedlings were thinned to two per pot and watered twice (~ 150 cm3) per day (morning and evening). No additional fertilizer was added to the planting substrate. Plants of 4–5 week-old were used for the various experiments. To determine the colonization of inoculated fungal isolates in tomato and nightshade, plants were carefully uprooted from the pots 4–5 weeks after inoculation and washed under running tap water to remove any soil attached to the plants. Seedlings (ca. 30 cm in height) were divided into three different sections (ca. 5 cm long): leaves, stems and root sections using a sterile scalpel23. Five randomly selected leaf, stem and root sections from each plant were surface-sterilized as described above. The different plant parts were then aseptically cut under a laminar flow hood into 1 × 1 cm pieces before placing the pieces, 4 cm apart on PDA plates supplemented with a 0.05% solution of antibiotic (streptomycin sulphate salt)23,28. Plates were incubated at 25 ± 1 °C for 10 days, after which the presence of endophytes was determined. The last rinse water was also plated to assess the effectiveness of the surface sterilization procedure as described earlier. Plate imprinting was also conducted to assess effective surface sterilization of plant materials74. The colonization of the different plant parts was recorded by counting the number of pieces of the different plant parts that showed the presence of inoculated fungal growth/mycelia according to Koch's postulates75. Only the presence of endophytes that were inoculated was scored. Fungal isolates were identified morphologically using slides which were prepared from the mother plates. Treatments were arranged in a randomized complete block design (RCBD) with four replicates per experiment23. 
The success rate of fungal endophyte colonization (%) of host plant parts was calculated as follows: $${\text{Colonization }}\left( \% \right) = \frac{{\text{Number of pieces exhibiting fungal outgrowth}}}{{{\text{Total number of pieces plated out}} }} \times 100$$ A colony of T. absoluta was established from wild adults and larvae collected from infested tomato leaves and fruits in Mwea (0° 36′ 31.3″ S 037° 22′ 29.7″ E), Kenya in June 2019. The moths were kept in ventilated, sleeved Perspex cages (40 × 40 × 45 cm) and were fed ad libitum with 10% honey solution placed to the top side of each cage as food source76. Four potted tomato plants were placed in the cages for oviposition. The plants were removed 24 h post-exposure to female insects and transferred to separate wooden cages (50 × 50 × 60 cm) ventilated with netting material at the sides and on the top until the eggs hatched. Leaves with larvae were removed from these plants, three days after the larvae hatched and placed into a clean sterile plastic containers (21 cm long × 15 cm wide × 8 cm high) lined with paper towel to absorb excess moisture and fine netting infused lid for ventilation. The larvae were supplied daily with fresh tomato leaves as food until they pupated. The pupae were collected from the plastic containers using a fine camel hair-brush and placed inside a clean plastic container for adult emergence. The colony was rejuvenated every three months through infusion, with infested tomato leaves collected from the field to reduce inbreeding13,76. Insects were maintained under a rearing condition of 28 ± 2 °C, 48% RH and 12:12 L:D photoperiod at the Animal Rearing and Quarantine Unit (ARQU) of icipe for five generations prior to bioassays13. Pathogenicity of endophytically-colonized tomato and nightshade plants on life history parameters of Tuta absoluta Based on their ability to colonize plant tissues of both host plants, nine isolates (B. bassiana ICIPE 273, ICIPE 35(4), ICIPE 35(15), ICIPE 706, F. proliferatum F2S51, T. harzianum F2L41, T. atroviride F2S21, H. lixii F3ST1 and T. asperellum M2RT4) were tested for their impact against oviposition potential, eggs, larval and pupal mortality, adult emergence and longevity of T. absoluta. Two-day-old mated adults (10 individuals at sex ratio of 1:2 male: female) were exposed for 48 h to four-week-old endophytically-colonized host plant seedlings in Plexiglas cages (50 cm × 50 cm × 45 cm). Each cage contained four potted plants that represented a treatment, and was maintained at 25 ± 2 °C, 40% RH and 12:12 L: D photoperiod. All the treatments were arranged in a randomized complete block design and the experiment replicated four times. After 48 h post-exposure, insects were removed from the cages and introduced into clean cages (20 cm × 20 cm × 20 cm) and their survival was recorded by counting the number of live adults daily inside the cages until all moths died23. For each treatment, 10 female adults T. absoluta were monitored and the experiment was replicated four times. Eggs that were laid on endophytically-colonized and control plants were maintained on the plants until they hatched. After hatching, larvae were allowed to feed upon their natal plants until they reached the 2nd and 3rd instars (approximately 8–10 d post-exposure). In the control, plants were not inoculated with fungal pathogens. For each treatment, the number of eggs laid on each plant was recorded as well as the number of mines and this was replicated four times. 
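To make the colonization score defined above concrete, a short numerical sketch follows; the counts and the helper function are hypothetical and serve only to illustrate the formula.

```python
def colonization_percent(pieces_with_outgrowth, pieces_plated):
    """Colonization (%) = pieces showing outgrowth of the inoculated fungus / pieces plated x 100."""
    return 100.0 * pieces_with_outgrowth / pieces_plated

# Hypothetical example: 5 sections x 4 replicates = 20 leaf pieces plated,
# 13 of which show outgrowth of the inoculated isolate.
print(colonization_percent(13, 20))   # -> 65.0
```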
Using a fine paint brush, larvae were transferred into cages containing four potted plants that were in the same developmental stage as the one on which the caterpillars had hatched and had been feeding previously. Dead moths were placed on Petri dishes lined with damp sterilized filter paper to allow fungal growth on the surface of the cadaver (mycosis test). Caterpillars were allowed to feed freely on the potted plants in a cage until they pupated. For each treatment, pupation was recorded daily and pupae were collected from leaves 10–11 d post-exposure, counted and then incubated at 25 ± 2 °C. Adult emergence was determined for each treatment, and non-viable pupae were also counted. Following adult emergence from the endophytically-colonized and control plants, 20 adult moths were selected per treatment and the survival of F1 progenies was recorded daily until all moths died and this was replicated four times77. The moths were maintained in a cage as described in section "insects" above. A 10% honey solution was provided as food and cages maintained at 25 ± 2 °C, 48% RH and 12:12 L:D photoperiod. To confirm that the mortality of the moths was as a result of direct fungal infection, dead insects were placed on a moistened filter paper in Petri dishes and were observed for post-mortem fungal sporulation (mycosis test). Mycosis was assessed by surface sterilizing the dead moths with 1% sodium hypochlorite followed by three rinses with sterile distilled water, after which the sterilized cadavers were placed on sterile wet filter paper in sterile Petri dishes that were then sealed with Parafilm and kept at room temperature. Each treatment consisted of 10 insects and replicated four times. Colonization rate and count data (number of eggs, mines, pupae and adults) were tested for normality using Shapiro–Wilk test78 and homogeneity of variance using Levene test. The data were not normally distributed and variances were not homogeneous, therefore colonization rate and adult emergence data were analyzed with generalized linear model (GLM) using binomial distribution and logit link function. Count data were analyzed with generalized linear model (GLM) with negative binomial error distribution taking into account overdispersion. Whenever there was a difference, the means were separated using Tukey's honest significant difference (HSD) test using "agricolae" package in R79. The survival curves were generated using Kaplan–Meier estimator method, and log-rank test was used to compare the effect of the various fungal isolates on T. absoluta exposed adults and F1 progenies survival using the "Survival" package80. To test for differences in survival rate among the treatments, we calculated Cox's proportional hazard81. All analyses were performed using the R (version 3.6.2) statistical software packages82 and all statistical results were considered significant at the confidence interval of 95% (P < 0.05). All insect rearing, handling and experiments were performed using standard operating procedures at the icipe Animal Rearing and Quarantine Unit as approved by the National Commission of Science, Technology and Innovations, Kenya (License No: NACOSTI/P/20/4253). The dataset generated during the current study are available from the corresponding author upon request. Ekesi, S., Chabi-Olaye, A., Subramanian, S. & Borgemeister, C. Horticultural pest management and the African economy: successes, challenges and opportunities in a changing global environment. Acta Hortic. 911, 165–183 (2011). Pratt, C. 
F., Constantine, K. L. & Murphy, S. T. Economic impacts of invasive alien species on African smallholder livelihoods. Glob. Food Secur. 14, 31–37 (2017). Desneux, N., Luna, M. G., Guillemaud, T. & Urbaneja, A. The invasive South American tomato pinworm, Tuta absoluta, continues to spread in Afro-Eurasia and beyond: the new threat to tomato world production. J. Pest Sci. 84, 403–408 (2011). Idriss, G. E. A. et al. Host range and effects of plant species on preference and fitness of Tuta absoluta (Lepidoptera: Gelechiidae). J. Econ. Entomol. https://doi.org/10.1093/jee/toaa002 (2020). Aigbedion-Atalor, P. O. et al. The South America tomato leafminer, Tuta absoluta (Lepidoptera: Gelechiidae), spreads its wings in Eastern Africa: distribution and socioeconomic impacts. J. Econ. Entomol. 112, 2797–2807 (2019). Biondi, A., Guedes, R. N. C., Wan, F.-H. & Desneux, N. Ecology, worldwide spread, and management of the invasive South American tomato pinworm, Tuta absoluta: past, present, and future. Annu. Rev. Entomol. 63, 239–258 (2018). Desneux, N. et al. Biological invasion of European tomato crops by Tuta absoluta: ecology, geographic expansion and prospects for biological control. J. Pest Sci. 83, 197–215 (2010). Guedes, R. N. C. C. et al. Insecticide resistance in the tomato pinworm Tuta absoluta: patterns, spread, mechanisms, management and outlook. J. Pest Sci. 92, 1329–1342 (2019). Dimbi, S., Maniania, N. K. & Ekesi, S. Horizontal transmission of Metarhizium anisopliae in fruit flies and effect of fungal infection on egg laying and fertility. Insects 4, 206–216 (2013). Maniania, N. K., Ekesi, S. & Dolinski, C. Entomopathogens routinely used in pest control strategies: orchards in tropical climate. In Microbial Control of Insect and Mite Pests: From Theory to Practice (Elsevier Inc., 2016). https://doi.org/10.1016/B978-0-12-803527-6.00018-4. Mweke, A. et al. Evaluation of the entomopathogenic fungi Metarhizium anisopliae, Beauveria bassiana and Isaria sp. for the management of Aphis craccivora (Hemiptera: Aphididdae). J. Econ. Entomol. 111, 1587–1594 (2018). ADS CAS PubMed Article Google Scholar Akutse, K. S. et al. Ovicidal effects of entomopathogenic fungal isolates on the invasive Fall armyworm Spodoptera frugiperda (Lepidoptera: Noctuidae). J. Appl. Entomol. 143, 626–634 (2019). Akutse, K. S., Subramanian, S., Khamis, F. M., Ekesi, S. & Mohamed, S. A. Entomopathogenic fungus isolates for adult Tuta absoluta (Lepidoptera: Gelechiidae) management and their compatibility with Tuta pheromone. J. Appl. Entomol. https://doi.org/10.1111/jen.12812 (2020). Inglis, G. D., Goettel, M. S., Butt, T. M. & Strasser, H. Use of hyphomycetous fungi for managing insect pests. In Fungi as Biocontrol Agents: Progress, Problems and Potential (eds. Butt, T. M. & Magan, M.) 23–69 (2001). https://doi.org/10.1079/9780851993560.0023. Behie, S. W. & Bidochka, M. J. Ubiquity of insect-derived nitrogen transfer to plants by endophytic insect-pathogenic fungi: an additional branch of the soil nitrogen cycle. Appl. Environ. Microbiol. 80, 1553–1560 (2014). Akutse, K. S., Khamis, F. M., Ekesi, S., Wekesa, S. & Subramanian, S. Effect of endophytically-colonized tomato and nightshade host plants on life-history parameters of Tuta absoluta (Lepidoptera: Gelechiidae). (International Congress on Invertebrate Pathology and Microbial Control and 52nd Annual Meeting of the Society for Invertebrate Pathology & 17th Meeting of the IOBC‐WPRS Working Group "Microbial and Nematode Control of Invertebrate Pests", 2019). Wilson, D. 
Endophyte: the evolution of a term, and clarification of its use and definition. Oikos 73, 274–276 (1995). Quesada-Moraga, E., Muñoz-Ledesma, F. J. & Santiago-Álvarez, C. Systemic protection of Papaver somniferum L. against Iraella luteipes (Hymenoptera: Cynipidae) by an endophytic strain of Beauveria bassiana (Ascomycota: Hypocreales). Environ. Entomol. 38, 723–730 (2009). Barelli, L., Moonjely, S., Behie, S. W. & Bidochka, M. J. Fungi with multifunctional lifestyles: endophytic insect pathogenic fungi. Plant Mol. Biol. 90, 657–664 (2016). Latz, M. A. C., Jensen, B., Collinge, D. B. & Jørgensen, H. J. L. Endophytic fungi as biocontrol agents: elucidating mechanisms in disease suppression. Plant Ecol. Divers. 11, 555–567 (2018). Ownley, B. H. et al. Beauveria bassiana: endophytic colonization and plant disease control. J. Invertebr. Pathol. 98, 267–270 (2008). Akello, J. & Sikora, R. Systemic acropedal influence of endophyte seed treatment on Acyrthosiphon pisum and Aphis fabae offspring development and reproductive fitness. Biol. Control 61, 215–221 (2012). Akutse, K. S., Maniania, N. K., Fiaboe, K. K. M., Van den Berg, J. & Ekesi, S. Endophytic colonization of Vicia faba and Phaseolus vulgaris (Fabaceae) by fungal pathogens and their effects on the life-history parameters of Liriomyza huidobrensis (Diptera: Agromyzidae). Fungal Ecol. 6, 293–301 (2013). Russo, M. L. et al. Endophytic effects of Beauveria bassiana on Corn (Zea mays) and its herbivore, Rachiplusia nu (Lepidoptera: Noctuidae). Insects 10, 2–9 (2019). Lahrmann, U. et al. Host-related metabolic cues affect colonization strategies of a root endophyte. Proc. Natl. Acad. Sci. USA 110, 13965–13970 (2013). Fadiji, A. E. & Babalola, O. O. Elucidating mechanisms of endophytes used in plant protection and other bioactivities with multifunctional prospects. Front. Bioeng. Biotechnol. 8, 1–20 (2020). Gathage, J. W. et al. Prospects of fungal endophytes in the control of Liriomyza leafminer flies in common bean Phaseolus vulgaris under field conditions. Biocontrol 61, 741–753 (2016). Muvea, A. M. et al. Colonization of onions by endophytic fungi and their impacts on the biology of Thrips tabaci. PLoS ONE 9, 1–7 (2014). Powell, W. A., Klingeman, W. E., Ownley, B. H. & Gwinn, K. D. Evidence of endophytic Beauveria bassiana in seed-treated tomato plants acting as a systemic entomopathogen to larval Helicoverpa zea (Lepidoptera: Noctuidae). J. Entomol. Sci. 44, 391–396 (2009). Klieber, J. & Reineke, A. The entomopathogen Beauveria bassiana has epiphytic and endophytic activity against the tomato leaf miner Tuta absoluta. J. Appl. Entomol. 140, 580–589 (2016). Resquín-romero, G., Garrido-jurado, I., Delso, C., Ríos-moreno, A. & Quesada-moraga, E. Transient endophytic colonizations of plants improve the outcome of foliar applications of mycoinsecticides against chewing insects. J. Invertebr. Pathol. 136, 23–31 (2016). PubMed Article CAS Google Scholar Mutune, B. et al. Fungal endophytes as promising tools for the management of bean stem maggot Ophiomyia phaseoli on beans Phaseolus vulgaris. J. Pest Sci. 89, 993–1001 (2016). Posada, F., Aime, M. C., Peterson, S. W., Rehner, S. A. & Vega, F. E. Inoculation of coffee plants with the fungal entomopathogen Beauveria bassiana (Ascomycota: Hypocreales). Mycol. Res. 111, 748–757 (2007). Bing, L. A. & Lewis, L. C. Suppression of Ostrinia nubilalis (Hübner) (Lepidoptera: Pyralidae) by endophytic Beauveria bassiana (Balsamo) Vuillemin. Environ. Entomol. 20, 1207–1211 (1991). Behie, S. W., Jones, S. 
J., Bidochka, M. J. & Hyde, K. Plant tissue localization of the endophytic insect pathogenic fungi Metarhizium and Beauveria. Fungal Ecol. 13, 112–119 (2015). Akello, J. et al. Beauveria bassiana (Balsamo) Vuillemin as an endophyte in tissue culture banana (Musa spp.). J. Invertebr. Pathol. 96, 34–42 (2007). Posada, F. J. & Vega, F. E. A new method to evaluate the biocontrol potential of single spore isolates of fungal entomopathogens. J. Insect Sci. 5, 1–10 (2005). Demers, J. E., Gugino, B. K. & del Jiménez-Gasco, M. Highly diverse endophytic and soil Fusarium oxysporum populations associated with field-grown tomato plants. Appl. Environ. Microbiol. 81, 81–90 (2015). Bogner, C. W. et al. Fungal root endophytes of tomato from Kenya and their nematode biocontrol potential. Mycol. Prog. 15, 1–17 (2016). Hardoim, P. R. et al. The hidden world within plants: ecological and evolutionary considerations for defining functioning of microbial endophytes. Microbiol. Mol. Biol. Rev. 79, 293–320 (2015). Martin, J. T. Role of cuticle in the defense against plant disease. Annu. Rev. Phytopathol. 2, 81–100 (1964). Jensen, R. E., Enkegaard, A. & Steenberg, T. Increased fecundity of Aphis fabae on Vicia faba plants following seed or leaf inoculation with the entomopathogenic fungus Beauveria bassiana. PLoS ONE 14, 1–12 (2019). Landa, B. B. et al. In-planta detection and monitorization of endophytic colonization by a Beauveria bassiana strain using a new-developed nested and quantitative PCR-based assay and confocal laser scanning microscopy. J. Invertebr. Pathol. 114, 128–138 (2013). Bing, L. A. & Lewis, L. C. Endophytic Beauveria bassiana (Balsamo) Vuillemin in corn: The influence of the plant growth stage and Ostrinia nubilalis (Hubner). Biocontrol Sci. Technol. 2, 39–47 (1992). Greenfield, M. et al. Beauveria bassiana and Metarhizium anisopliae endophytically colonize cassava roots following soil drench inoculation. Biol. Control 95, 40–48 (2016). Card, S., Johnson, L., Teasdale, S. & Caradus, J. Deciphering endophyte behaviour: the link between endophyte biology and efficacious biological control agents. FEMS Microbiol. Ecol. 92, 1–19 (2016). Philippot, L., Raaijmakers, J. M., Lemanceau, P. & Van Der Putten, W. H. Going back to the roots: the microbial ecology of the rhizosphere. Nat. Publ. Gr. 11, 789–799 (2013). Tumuhaise, V. et al. Pathogenicity and performance of two candidate isolates of Metarhizium anisopliae and Beauveria bassiana (Hypocreales: Clavicipitaceae) in four liquid culture media for the management of the legume pod borer Maruca vitrata (Lepidoptera: Crambidae). Int. J. Trop. Insect Sci. 35, 34–47 (2015). Branine, M., Bazzicalupo, A. & Branco, S. Biology and applications of endophytic insect-pathogenic fungi. PLoS Pathog. 15, 1–7 (2019). Barelli, L., Moreira, C. C. & Bidochka, M. J. Initial stages of endophytic colonization by Metarhizium involves rhizoplane colonization. Microbiology 164, 1531–1540 (2018). Wyrebek, M., Huber, C., Sasan, R. K. & Bidochka, M. J. Three sympatrically occurring species of Metarhizium show plant rhizosphere specificity. Microbiology 157, 2904–2911 (2011). Muvea, A. M. et al. Behavioral responses of Thrips tabaci Lindeman to endophyte-inoculated onion plants. J. Pest Sci. 88, 555–562 (2015). Slansky, F. Jr. Insect nutrition: an adaptationist's perspective. Florida Entomol. 65, 45–71 (1982). Carroll, G. Fungal endophytes in stems and leaves: from latent pathogen to mutualistic symbiont. Ecology 69, 2–9 (1988). Allegrucci, N., Velazquez, M. S., Russo, M. 
L., Perez, E. & Scorsetti, A. C. Endophytic colonisation of tomato by the entomopathogenic fungus Beauveria bassiana: the use of different inoculation techniques and their effects on the tomato leafminer Tuta absoluta (Lepidoptera : Gelechiidae). J. Plant Prot. Res. 54, 331–337 (2017). Barta, M. In planta bioassay on the effects of endophytic Beauveria strains against larvae of horse-chestnut leaf miner (Cameraria ohridella). Biol. Control 121, 88–98 (2018). Russo, M. L. et al. Effect of endophytic entomopathogenic fungi on soybean Glycine max (L.) Merr. growth and yield. J. King Saud Univ. Sci. 31, 728–736 (2018). Contreras-cornejo, H. A., Macías-rodríguez, L. & Larsen, J. The root endophytic fungus Trichoderma atroviride induces foliar herbivory resistance in maize plants. Appl. Soil Ecol. 124, 45–53 (2017). Contreras-Cornejo, H. A., Macías-Rodríguez, L., Del Val, E. & Larsen, J. Ecological functions of Trichoderma spp. and their secondary metabolites in the rhizosphere: interactions with plants. FEMS Microbiol. Ecol. 92, 1–17 (2016). Coppola, M. et al. Trichoderma harzianum enhances tomato indirect defense against aphids. Insect Sci. 24, 1025–1033 (2017). Meera, M. S., Shivanna, M. B., Kageyama, K. & Hyakumachi, M. Persistence of induced systemic resistance in cucumber in relation to root colonization by plant growth promoting fungal isolates. Crop Prot. 14, 123–130 (1995). Lewis, L. C., Berry, E. C., Obrycki, J. J. & Bing, L. A. Aptness of insecticides (Bacillus thuringiensis and carbofuran ) with endophytic Beauveria bassiana, in suppressing larval populations of the European corn borer. Agric. Ecosyst. Environ. 57, 27–34 (1996). Qayyum, M. A., Wakil, W., Arif, M. J., Sahi, S. T. & Dunlap, C. A. Infection of Helicoverpa armigera by endophytic Beauveria bassiana colonizing tomato plants. Biol. Control 90, 200–207 (2015). Jallow, M. F. A., Dugassa-Gobena, D. & Vidal, S. Influence of an endophytic fungus on host plant selection by a polyphagous moth via volatile spectrum changes. Arthropod. Plant. Interact. 2, 53–62 (2008). Jaber, L. R. & Vidal, S. Fungal endophyte negative effects on herbivory are enhanced on intact plants and maintained in a subsequent generation. Ecol. Entomol. 35, 25–36 (2010). Davis, T. S., Crippen, T. L., Hofstetter, R. W. & Tomberlin, J. K. Microbial volatile emissions as insect semiochemicals. J. Chem. Ecol. 39, 840–859 (2013). Silva, D. B., Bueno, V. H. P., Lins, J. C. & Van Lenteren, J. C. Life history data and population growth of Tuta absoluta at constant and alternating temperatures on two tomato lines. Bull. Insectol. 68, 223–232 (2015). Pereyra, P. C. & Sánchez, N. E. Effect of two solanaceous plants on developmental and population parameters of the tomato leaf miner, Tuta absoluta (Meyrick) (Lepidoptera: Gelechiidae). Neotrop. Entomol. 35, 671–676 (2006). Dash, C. K. et al. Endophytic entomopathogenic fungi enhance the growth of Phaseolus vulgaris L. (Fabaceae) and negatively affect the development and reproduction of Tetranychus urticae Koch (Acari: Tetranychidae). Microb. Pathog. 125, 385–392 (2018). Akello, J., Dubois, T., Coyne, D. & Kyamanywa, S. Endophytic Beauveria bassiana in banana (Musa spp.) reduces banana weevil (Cosmopolites sordidus) fitness and damage. Crop Prot. 27, 1437–1441 (2008). Golo, P. S. et al. Production of destruxins from Metarhizium spp. fungi in artificial medium and in endophytically colonized cowpea plants. PLoS ONE 9, 1–9 (2014). Goettel, M. S. & Inglis, D. G. Fungi: Hyphomycetes. 
Manual of Techniques in Insect Pathology (1997). https://doi.org/10.1016/B978-012432555-5/50013-0. Schulz, B., Guske, S., Dammann, U. & Boyle, C. Endophyte-host interactions. II. Defining symbiosis of the endophyte–host interaction. Symbiosis 25, 213–227 (1998). Inglis, G. D., Enkerli, J. & Goettel, M. S. Laboratory Techniques Used for Entomopathogenic Fungi. Hypocreales. Manual of Techniques in Invertebrate Pathology (Elsevier, New York, 2012). https://doi.org/10.1016/B978-0-12-386899-2.00007-5 Petrini, O. & Fisher, P. J. Fungal endophytes in Salicornia perennis. Trans. Br. Mycol. Soc. 87, 647–651 (1986). Aigbedion-Atalor, P. O. et al. Host stage preference and performance of Dolichogenidea gelechiidivoris (Hymenoptera: Braconidae), a candidate for classical biological control of Tuta absoluta in Africa. Biol. Control 144, 1–8 (2020). Oliveira, F. A., da Silva, D. J. H., Leite, G. L. D., Jham, G. N. & Picanço, M. Resistance of 57 greenhouse-grown accessions of Lycopersicon esculentum and three cultivars to Tuta absoluta (Meyrick) (Lepidoptera: Gelechiidae). Sci. Hortic. (Amsterdam) 119, 182–187 (2009). Shapiro, S. S. & Wilk, M. B. An analysis of variance test for normality (complete samples). Biometrika 52, 591–611 (1965). MathSciNet MATH Article Google Scholar De Mendiburu, F. agricolae: statistical procedures for agricultural research. R package version 1.3–2 https://CRAN.R-project.org/package=agricolae (2020). Therneau, T. A Package for Survival Analysis in R. R package version 3.1-12, https://CRAN.R-project.org/package=survival. (2020). Crawley, M. J. The R Book (Wiley, New York, 2007). https://doi.org/10.1002/9780470515075. Book MATH Google Scholar R Core Team. R: A Language and Environment for Statistical Computing (R Foundation for Statistical Computing, Vienna, 2019). This research was funded by the African Union (AU) (Tuta-IPM Project, Contract Number: AURG II-2-123-2018), UK's Foreign, Commonwealth and Development Office (FCDO) (FCDO Biopesticide Project, B2291A- FCDO -BIOPESTICIDE), and BioInnovate Africa Phase I project "Promoting smallholder access to fungal biopesticides through Public Private Partnerships in East Africa" (BA/CI/2017-02 (PROSAFE) through the International Centre of Insect Physiology and Ecology (icipe). The authors gratefully acknowledge the icipe core funding provided by UK's Foreign, Commonwealth and Development Office (FCDO); Swedish International Development Cooperation Agency (Sida); the Swiss Agency for Development and Cooperation (SDC); the Federal Democratic Republic of Ethiopia; and the Government of the Republic of Kenya. The first author was supported through the Dissertation and Research Internship Program (DRIP) of icipe. We are also thankful to Dr. Daisy Salifu for her statistical advice, and to Sospeter Wafula, Jane Kimemia and Levi Ombura for their technical assistance. The views expressed herein do not necessarily reflect the official opinion of the donors. International Centre of Insect Physiology and Ecology (icipe), P.O. Box 30772-00100, Nairobi, Kenya Ayaovi Agbessenou, Komivi S. Akutse, Sunday Ekesi, Sevgan Subramanian & Fathiya M. Khamis Department of Zoology and Entomology, University of Pretoria, Private Bag X20, Hatfield, 0028, South Africa Ayaovi Agbessenou & Abdullahi A. Yusuf Forestry and Agricultural Biotechnology Institute (FABI), University of Pretoria, Private Bag X20, Hatfield, 0028, South Africa Abdullahi A. Yusuf Ayaovi Agbessenou Komivi S. Akutse Sunday Ekesi Sevgan Subramanian Fathiya M. Khamis A.A., F.M.K. and K.S.A. 
conceived and designed the experiment. A.A. performed the experiment and analyzed the data. A.A., K.S.A., A.A.Y., S.E., S.S. and F.M.K. wrote the manuscript. All authors have read and agreed to the published version of the manuscript. Correspondence to Komivi S. Akutse. The authors declare no competing interests. Agbessenou, A., Akutse, K.S., Yusuf, A.A. et al. Endophytic fungi protect tomato and nightshade plants against Tuta absoluta (Lepidoptera: Gelechiidae) through a hidden friendship and cryptic battle. Sci Rep 10, 22195 (2020). https://doi.org/10.1038/s41598-020-78898-8 Accepted: 01 December 2020 Temperature-dependent modelling and spatial prediction reveal suitable geographical areas for deployment of two Metarhizium anisopliae isolates for Tuta absoluta management Scientific Reports (2021)
Atomic structure and crystallography of joints in SnO2 nanowire networks Viktor Hrkac1, Niklas Wolff1, Viola Duppel2, Ingo Paulowicz3, Rainer Adelung4, Yogendra Kumar Mishra4 & Lorenz Kienle1 Joints of three-dimensional (3D) rutile-type (r) tin dioxide (SnO2) nanowire networks, produced by the flame transport synthesis (FTS), are formed by coherent twin boundaries at (101)r serving for the interpenetration of the nanowires. Transmission electron microscopy (TEM) methods, i.e. high resolution and (precession) electron diffraction (PED), were utilized to collect information of the atomic interface structure along the edge-on zone axes [010]r, [111]r and superposition directions [001]r, [101]r. A model of the twin boundary is generated by a supercell approach, serving as base for simulations of all given real and reciprocal space data as for the elaboration of three-dimensional, i.e. relrod and higher order Laue zones (HOLZ), contributions to the intensity distribution of PED patterns. Confirmed by the comparison of simulated and experimental findings, details of the structural distortion at the twin boundary can be demonstrated. The capability of developing functional (nano-)materials for complex or high-tech applications originates in the deliberate control of innovative and sophisticated manufacturing processes (Davis 2002; Dick et al. 2004; Mathur et al. 2005). In the field of stretchable and porous ceramics, the flame transport synthesis (FTS) emerged as a unique method for a rapid production of macroscopic amounts of oxide semiconductor or ceramic materials with tunable properties (Mishra et al. 2013). With variation of process parameters and utilization of various metal oxides (e.g. ZnO, Fe2O3, Al2O3, TiO2) a fabrication of several three dimensionally interconnected network systems is realized. The latter can be categorized into different synthesis classes, forming all by the combination of quasi one-dimensional (Q1D) nano−/microstructure building blocks, as described elsewhere (Xia et al. 2003; Zhang et al. 2003). In this manner, flexible macroscopic materials with the advantageous properties of ceramics are formed, enabling the application in demanding fields such as light-weight space technologies, high-temperature flexible sensors or stretchable implants (Rice 1998; Klawitter and Hulbert 1971; Sousa et al. 2003). The junctions of the building blocks are based on structural defects at the nanoscale. Thus, the mere presence, concentration and type of these defects define the morphology, the physical properties, e.g. piezoelectricity, of such a network and subsequently the efficiency/ performance of potential devices (Molarius et al. 2004). In other words, the enhancement or intended manipulation progression of certain properties is driven by a complete understanding of the real structure-to-property relation. As immanent task the investigation of these defects with adequate measuring equipment becomes compulsory: Conventional and advanced transmission electron microscopy (TEM) enables an analytical approach of studying such defects and deducing required structure models (Hrkac et al. 2015). In this work, detailed TEM investigations are reported for the junction forming structural defects, i.e. twin boundaries, of FTS synthesized flexible 3D networks based on tin oxide (SnO2). Property and basic structural investigations are provided in previous works elsewhere (Paulowicz et al. 2015). 
It must be emphasized, that the understanding of structural defect phenomena observed in TEM lies in the possibility of their computational simulation and, thus, in their quantitative description. The main challenge is the combination of comparatively uncomplicated simulations of periodic structures with those of non-periodic objects, e.g. twin boundary, planar defect, antiphase boundary. Although models were already described for SnO2 defect structures, a substantial improvement can still be achieved, especially considering the simulation of the high-resolution contrast by applying a supercell approach. This approach (Deiseroth et al. 2004; Kienle and Simon 2002) provides such upsides and is used to transform common unit cells into elaborated defect models using exclusively basic crystallographic principles. Further a step-by-step guide is provided to generate a complete 3D model of the SnO2 twin boundary, which is verified by cross-sectional and superposition TEM data of the defect structure. Details of the synthesis process can be found elsewhere (Mishra et al. 2013). Scanning electron microscopy (SEM) studies were conducted with a Carl Zeiss microscope (10 kV, 10 μA). High resolution transmission electron microscopy (HRTEM) has been realized with a Philips CM 30 ST microscope (LaB6 cathode, 300 kV, CS = 1.15 mm). Sample preparation for TEM was carried out by a grinding method and subsequent placement of the specimen on a lacey carbon/copper grid. This grid was fixed in a side-entry, double-tilt holder with a maximum tilting of ±25° in two directions. A spinning star device enabled precession electron diffraction (PED) (Schürmann et al. 2011). Simulations of HRTEM micrographs and PED patterns have been calculated using the JEMS program package (Stadelmann 1987) and in particular a multi-slice formalism (JEMS preferences: spread of defocus: 70 Å, illumination semi-angle: 1.2 mrad). For the evaluation of ED patterns and HRTEM micrographs (including Fourier filtering) the program Digital Micrograph 3.6.1 (Gatan) was used. A contrast enhancement of the micrographs was performed with a HRTEM filter plug-in for DM (Kilaas 1998). Chemical analyses by energy-dispersive X-ray spectroscopy (EDS) were performed with a Si/Li detector (Noran, Vantage System). Supercell approach A planar defect of the crystal structure is converted into a pseudo periodic feature by embedding into a suitable supercell. The entire supercell approach subdivides into three general steps: The ideal structure from the material of choice must be transformed into a triclinic (P1) structure considering the defect type and its orientation, cf. Fig. 1a. As a consequence of the symmetry reduction, all symmetry restrictions according to the ideal structure are eliminated with a simultaneous preservation of all these symmetry elements as pseudo-symmetry elements. Note, optimized metrics of the supercell, i.e. all angles of the generated cell are restricted to 90°, may be preferred or are essential for specific defect types. Otherwise the description of both domains separated at the defect in one common unit cell is inhibited. Any deviation from this coincidence condition can result in severe deviations between final simulations and experimental observation (Hrkac et al. 2013). 
The mathematical formalism of an unit-cell transformation into an unconventional description is given in (Hahn 2002): $$ \left({a}^{\hbox{'}},{b}^{\hbox{'}},{c}^{\hbox{'}}\right)=\left(a,b,c\right)P, and\left(\begin{array}{c}{u}^{\hbox{'}}\\ {}{v}^{\hbox{'}}\\ {}{w}^{\hbox{'}}\end{array}\right)=Q\left(\begin{array}{c}u\\ {}v\\ {}w\end{array}\right)\ \mathrm{with}\kern0.5em Q={P}^{\hbox{-} 1} $$ Supercell approach: a Stylized representation of periodic crystal (black bordered cubes as unit cells). The supercell (red cuboid) is extracted from the periodic crystal with one facet (blue plane) as embedded defect within the crystal. b Comparison: supercells with a defect plane (blue) fulfilling (left) and violating (right) the coincidence condition for the superposition and separation steps: i) Inversion/ mirroring of the supercell along the defect plane. ii) Superposition of the supercell with its inversed duplicate. iii) Separated SPS including the defect plane as domain boundary (a, b, c) represent the basis vector of the direct space, u, v, w are indices of a direction in direct space, (') as mark of the parameters for the P1 cell: P and Q as (3 × 3) square matrices, linear parts of an affine transformation. The characteristic symmetry element of the defect type defines the number and the construction manner of modified supercells, see the example in Fig. 1b: The blue side surface in the sketch represents a twin interface (mirror plane) separating the original supercell with its mirrored counterpart (Fig. 1b-i, left). By superimposing and merging these cells into one, so-called superposition structure (SPS) can be constructed (Fig. 1b-ii, left). Violating the 90° (coincidence) restriction will lead to an artifact-containing SPS, cf. Fig. 1b-i/ -ii, right. An isolation or segregation of individual domains within the SPS is executed which creates a new (separation-)cell with the domain boundary as center part, see Fig. 1b-iii. The fibrous morphology of the 3D SnO2 network is illustrated in Fig. 2a. This FTS network is constructed by nano−/ microwires and nanobelts interconnecting via junctions as exemplarily demonstrated in the inset of Fig. 2a. A detailed structural characterization of such features was carried out by TEM: All obtained data confirm a rutile type structure (index label: r) with the tetragonal space group P42/mnm. Chemical impurities within the investigated areas can be excluded via accompanied EDS analyses which exhibited exclusively a composition of SnO2. The appearance of twinning defects was a frequently observed feature, as exemplary presented in HRTEM micrographs of Fig. 2b, c. In both cases, the analysis of fast Fourier transform (FFT) pattern identifies the common twin plane to be {101}r with a rotation angle between the mirrored domain individuals of 68.5°. Note, in all investigated cases only this twin type was observed. Following previous studies (Zheng et al. 1996) the observed twin can be classified as growth twin consisting of a coherent twin boundary (CTB). Additionally, a tendency of interpenetration twinning was reported for the CTB explaining the complex macroscopic morphology of single crystals and the entire networks. Three-dimensional network composed of interconnected SnO2 micro- and nanostructures: Scanning electron micrographs of a an overview network and a representative junction (inset). High resolution transmission electron micrographs of the SnO2 twin interface with the (101)r coherent twin boundary along the b [010]r and c [111]r zone axes. 
Circular insets: respective Fast Fourier Transformation patterns with the 101 intensities marked. Rectangular insets: emphasized views on the (101)r twin boundary Transformation of the ideal SnO2 structure The tetragonal rutile-type structure (space group: P42/mnm) is transformed to a rectangular P1 supercell by applying the matrices (Fig. 3a): $$ P=\left(\begin{array}{ccc}1& 0& 1\\ {}0& 1& 0\\ {}-5& 0& 11\end{array}\right),Q=\left(\begin{array}{ccc}11/16& 0& 5/16\\ {}0& 1& 0\\ {}-1/16& 0& 1/16\end{array}\right) $$ Supercell approach for the coherent twin boundary (101)r/[010]r of the SnO2 rutile type structure: a Supercell, b SPS and c basic separation model. See text for details The new cell parameters are a' = 0.5706 nm, b' = 0.4735 nm, c' = 4.227 nm and α' =90°, β' = 90.14°, γ' = 90°. Note, for the upcoming steps and simulations β' is set to 90° in order to fulfill the coincidence condition. This approximation is legitimate as the slight deviation results in a negligible error. Further, all transformed indices are labeled with t. In notation of the supercell, the directions [010]r, [111]r, [001]r transform to [010]t, [110]t and [501]t, respectively. Creation of the SPS A second supercell is created by mirroring its atomic positions with respect to the twin boundary. With a subsequent superimposition of both supercells a SPS is obtained, cf. Fig. 3b. Separation and shift The number of atoms of the SPS has been reduced by deliberately eliminating one supercell individual from each half cell, see plain and red marked regions from Fig. 3c. Additionally, the redundant duplicates of the central atoms (fine dotted line) were removed. Thus, this basic separation model consists of two single domains mirrored at an incorporated twin boundary. The atomic distances at the interface are modified to match up with the corresponding distances within the bulk by applying a shift vector (\( \frac{1}{2} \) [010]t) on one single domain of the defect model, as demonstrated by the arrows in Fig. 4a. A polyhedral representation of the basic and improved defect model (Fig. 4b and c-i, c-ii) emphasizes the necessity of the vector: Only the improved model provides meaningful atomic distances in the vicinity of the central Sn atoms, cf. yellow marked octahedrons in Fig. 4b, c. Basic (left) vs. improved (right) defect model along the a [100]t and b [010]t direction. c Excerpted SnO6 octahedrons: coherent twin boundary from i) basic vs. ii) improved models and iii) bulk octahedron, values are given in picometer. See text for details As the twin boundary became an incorporated feature of the supercell, simulations of multiple twinning are also enabled: A combination of slightly modified cells from step 1 and 3 can be used to tailor adequate three-dimensional (multi-)defect models. Moreover, the atomic coordinates of the derived supercells and corresponding transformation matrices can be adapted for other rutile-type structures which contain the same {101}r twin defect, e.g. considering different c/a ratios. The improved model enables further the focused accentuation of the different SnO6-octahedrons, namely the CTB and defect-free bulk types (see Fig. 4c), and the determination of the degree of distortion for these octahedrons at the CTB. The atomic distances differ up to ca. 15%. 
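As a quick numerical check of this transformation step, the quoted supercell metrics and transformed zone-axis indices can be reproduced with a few lines of linear algebra. The sketch below is ours and not part of the original work; it assumes rutile SnO2 lattice parameters of roughly a = 0.4737 nm and c = 0.3186 nm, and reads the printed matrices with P acting on the basis vectors row-wise and Q acting on direction columns, a reading that reproduces the values given above.

```python
import numpy as np

# Assumed rutile SnO2 lattice parameters (nm)
a, c = 0.4737, 0.3186
basis = np.array([[a, 0, 0],      # a_r
                  [0, a, 0],      # b_r
                  [0, 0, c]])     # c_r (rows are Cartesian basis vectors)

P = np.array([[1, 0, 1],
              [0, 1, 0],
              [-5, 0, 11]])
Q = np.array([[11, 0, 5],
              [0, 16, 0],
              [-1, 0, 1]]) / 16.0            # Q as printed above (entries in sixteenths)

new_basis = P @ basis                         # rows: a', b', c' of the P1 supercell
lengths = np.linalg.norm(new_basis, axis=1)
print(lengths)                                # -> ~[0.571, 0.474, 4.23] nm
cos_beta = new_basis[0] @ new_basis[2] / (lengths[0] * lengths[2])
print(np.degrees(np.arccos(cos_beta)))        # -> ~90.1 deg (beta')

for uvw in ([0, 1, 0], [1, 1, 1], [0, 0, 1]):
    t = Q @ np.array(uvw)
    t = t / np.min(np.abs(t[np.nonzero(t)]))  # reduce to smallest integer-like indices
    print(uvw, '->', t)                       # [010]->[010]t, [111]->[110]t, [001]->[501]t
```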
Note, ab-initio calculations and experimental observations with aberration corrected TEM could provide enhanced information of the structural nature at the CTB, particularly about oxygen atom positions, and support probable modification of the defect model. An in-depth analysis of the twinned rutile type structure is carried out by performing a multiple zone axis study; see highlighted zone axes in the stereographic projection of Fig. 5a. a Stereographic projection of the rutile type SnO2 with emphasis on the experimentally studied zone axes. b High resolution micrographs with simulations based on the defect model as inset along -i) [010]r, −ii) [111]r and -iii) [001]r. See text for details Tilting experiments I – real structure The selected directions for the analyses are including the CTB in cross-section (i.e. [010]r and [111]r) and indirect plane view (i.e. [001]r). High resolution contrasts were recorded for the marked directions and evaluated with simulations based on the modified separation model of step 3, cf. Fig. 5b: An excellent agreement between the respective calculation and experimental data-couples was achievable by identifying the parameter settings, i.e. objective lens defocus (Δf) and specimen thickness (t) for-i) [010]r: Δf = 0 nm - t = 4.73 nm, −ii) [111]r: Δf = − 52 nm - t = 4.45 nm and -iii) [001]r: Δf = 0 nm - t = 5.73 nm. All depicted high resolution contrasts exhibit distinct features, i.e. a clear edge-on view on the CTB (Fig. 5b-i), a strong contrast deviation of adjacent twin domains stemming from an oblique view to the CTB (Fig. 5b-ii), and a superposition contrast (Fig. 5b-iii). The perpendicular main directions [010]r and [001]r were selected for additional defocus series, as presented in Fig. 6. The series in Fig. 6a demonstrates the contrast change for the CTB with a specimen thickness of 4.73 nm and Δf of –i) 0 nm, −ii),-51 nm and -iii) -58 nm (close to the Scherzer defocus), respectively. The inserted simulations are in each case in convincing agreement to the experimental data, validating the quality of the defect model. Note, visible contrast deviations result from a strong thickness gradient propagating from bottom to top of each panel. The defocus series of the superposition structure along [001]r (see Fig. 6) is of particular interest: At first glance the experimental data appear to be a conventional single domain contrast with no direct evidence of twinning influence. However, solely the defect model provides an experiment/calculation accordance by using the parameter settings Δf for b-i) 0 nm, b-ii) -5 nm, b-iii) -60 nm and b-iv) -95 nm with a common specimen thickness of 5.73 nm. The deliberate comparison of the single domain SnO2 rutile-type with the defect model simulations further evidences superposition twinning as origin for the high-resolution contrast, cf. the series of square panel comparisons in Fig. 6b. Note, other parameter settings for the rutile-type model exhibit even larger deviations to the presented experimental data. High resolution defocus series of the twin defect introduced in Fig. 5 for the edge directions a [010]r and b [001]r with inserted simulations. For b bottom panels: simulations of (left) rutile-type vs. (right) the defect model with corresponding parameter settings, respectively. 
The details of respective parameter setting (i, ii, iii, (iv)) are given in the text Tilting experiments II – reciprocal space Precession electron diffraction patterns of the CTB are compared with simulations based on the SPS in a plane view, cf. Fig. 7. Both, simulation and experimental ED pattern are merged for the respective direction (i.e., [010]r in Fig. 7a and [111]r in Fig. 7b) in one common representation, showing a good coinciding match of the intensity. An additional optical support is provided by graphical marks in the ED patterns: The (yellow) diamonds mark common reflections from both domains, while the (blue) circles and (red) squares emphasize the contributions from respective twin domains. Although a first considerable agreement of componential and experimental data can be imputed, a more substantial and critical assessment is achieved by a quantifying examination of the reflection rows, see the roman enumeration and intensity plots of Fig. 7. Straight (blue) lines represent the experimental profile measurements and the dashed (black) lines correlated the simulated equivalents. The high coincidence of associated reflection position validates the accuracy of the supercell approach with respect to geometrical (i.e. lattice) aspects. Considering the intensity distribution of the reflections, a deviation must be consternated. Such a difference is explainable by taking ratio of twin domain volumes into account. The simulation is calculated using the kinematic model and the same occupancy factor for all atoms in the SPS (i.e. one). As consequence, the twin domain ratio is 1:1. The experimental observation of such an idealized case appears to be highly improbable, in particular, considering the TEM preparation and aperture size (illuminated area: 100 nm) in ED mode for this study. Note, the variation of the occupancy factor and/or using the dynamic scattering model can lead to an improved matching between experimental and simulated intensities, see examples elsewhere (Hrkac et al. 2013). Additionally, appearing artifact reflections in the experimental PED pattern stem most probably from other grains. Experimental vs. simulated precession electron diffraction pattern data along a [010]r and b [111]r. See text for details A PED pattern was recorded along the [001]r direction exhibiting the absence of a direct twinning indication, as presented in Fig. 8a. Three Bragg reflection types can be identified within the experimental data: The first type, fundamental Bragg reflections, stems from the P42/mnm rutile-type, the second type, dynamical scattering reflections, are generated by the rather high specimen thickness (neglected for following simulations and discussions) and a third group, which exhibits with an asymmetric intensity distribution on the top hemisphere of the PED pattern. Note, the latter type exclusively appeared in PED mode, the corresponding selected area ED pattern was free of such feature. Kinematic simulations from both ideal rutile-type (Fig. 8b) and defect (Fig. 8c) model recreate the fundamental reflections with identical intensity ratios, consequently fail a clear assignment of the experimental data. An unambiguous identification of the reciprocal data was enabled by the corresponding real structure study illustrated in Fig. 6b. The third type reflections, at first glance, may be interpreted as debris reflections, originating from other grains accidently illuminated during the PED procedure. 
In some instances, randomly appearing reflections can be detected and explained in this manner. However, the third type reflections display a systematic characteristic, which matches, partially, a higher-order-Laue-zone-(HOLZ) including simulation of the defect model, cf. Fig. 8d. As two twinned components are involved in the formation of this PED pattern, HOLZ or also intersecting relrods from the additional component may cause the (asymmetric) presence of this feature. A further factor within the experimental data is the influence of a potential twin domain ratio deviating from unity. A minor domain will influence HOLZ and relrod formation and may also cause the asymmetry. Note, in other single domain data, experimentally and simulated patterns, no evidence of those third type reflections was found. Superposition views of the coherent twin boundary along [001]r: a Experimental PED pattern along [001]r vs. simulated ED pattern b based on a defect free rutile-type SnO2 model and c the SPS defect model. b and c based on the kinematic model. d Simulated PED including higher order Laue zones An additional example for superposition observation of the CTB was detected for the [101]r direction, cf. PED pattern in Fig. 9a. The precession mode minimized the effects of dynamical scattering, allowing a detection of intensity variations within the Bragg reflection rows, cf. normalized intensity plots. The kinematic calculations from defect (ED along [\( \overline{3} \) 01]t; b-i) and rutile-type model (ED along [101]r, Fig. 9b-ii) contain identical positions of the corresponding Bragg reflections and show a mutual good agreement to the experimentally observed reflections. An in-depth analysis of the patterns emphasizes a deviation in the intensity ratios and favors the defect structure as more appropriate interpretation of the experimental data, see normalized intensity plots. However, based only on these features a clear assignment of the experimental data to one of the structure candidates is impeded, due to the incomplete suppression of dynamical scattering. The most apparent evidence for the latter is the presence of the {101} reflections as marked with solid (yellow) circles. Electron diffraction study: a Experimental PED along [\( \overline{3} \) 01]t, b kinematic simulations along: -i) [\( \overline{3} \) 01]t (defect model) and -ii) [101]r (rutile-type model). Normalized intensity plots are adjacent to the marked areas, respectively. c PED simulation along [\( \overline{3} \) 01]t (defect model). d Kinematic simulation along [100]t (defect model). The circular marked intensities are additional reflections with respect to the kinematic and defect-free rutile type model. See text for details The experimental PED pattern offers, despite the expected zero order (ZO)LZ reflections, most probably Bragg reflections stemming from HOLZ, i.e., the faint reflections marked with dashed circles (Fig. 9a). These reflections appeared exclusively in precession mode and are located on commensurable (h/2, k/2, l/2) lattice positions. A PED simulation based on the defect model reproduces these reflections, cf. marks in the Fig. 9c, and thus verifies HOLZ as potential origin. All HOLZ reflections possess comparably high intensities in the calculated patterns, differing to the general observation in the experimental pattern. As mentioned above, the discrepancy may result from a non 1:1 twin domain ratio and further dynamical scattering influences. 
It must be noted, that conventional ED simulation with sample thicknesses > 80 nm also shows these commensurable reflections. Another feature of the defect structure is emphasized with the diffraction study: While the directions [101]r and [011]r are completely symmetry equivalent in the rutile-type structure, the corresponding directions (i.e. [\( \overline{3} \) 01]t and [100]t, respectively) exhibit clear variation in the intensity ratios of the kinematic Bragg reflections, see Fig. 9c and d. In this study, a complete TEM characterization was discussed for one of the major defects, i.e. coherent twin boundary at (101)r, creating interconnected FTS-SnO2 3D networks. The analytical process was enabled by deducing a set of defect models under the principle of a supercell approach. For the latter, exclusively crystallographic transformation is applied to the ideal rutile-type structure to develop a periodic structure with the observed defect plane incorporated. Comparison of calculated and experimental data exhibited excellent agreement for edge-on ([010]r, [111]r) and superposition ([001]r, [101]r) views of the defect structure. In particular, the identification of the pseudo single domain contrasts along the [001]r direction was a clear evidence of the model validity. The generation of three-dimensional information from the two-dimensional diffraction pattern, was exclusively enabled by the defect model, and as consequence, quantitative statements of the atomic configuration at the twin interface were achieved. 3D: CTB: Coherent twin boundary EDS: Energy-dispersive X-ray spectroscopy FFT: FTS: Flame transport synthesis HOLZ: Higher order Laue zones HRTEM: High resolution transmission electron microscopy PED: Precession electron diffraction Q1D: Quasi one-dimensional SnO2 : Tin dioxide SPS: Superposition structure TEM: M.E. Davis, Ordered porous materials for emerging applications. Nature 417, 813–821 (2002) H.J. Deiseroth, C. Reiner, K. Xhaxhiu, M. Schlosser, L. Kienle, X‐Ray and Transmission Electron Microscopy Investigations of the New Solids In5S5Cl, In5Se5Cl, In5S5Br, and In5Se5Br. Z. Anorg Allg. Chem. 630, 2319–2328 (2004) K.A. Dick, K. Deppert, M.W. Larsson, T. Martensson, W. Seifert, L.R. Wallenberg, L. Samuelson, Synthesis of branched 'nanotrees' by controlled seeding of multiple branching events. Nat. Mater. 3, 380–384 (2004) T. Hahn (ed.), International tables for crystallography, vol A (Kluwer Academic Publishers, Dordrecht/Boston/ London, 2002) V. Hrkac, L. Kienle, S. Kaps, A. Lotnyk, Y.K. Mishra, U. Schürmann, V. Duppel, B.V. Lotsch, R. Adelung, Superposition twinning supported by texture in ZnO nanospikes. J. Appl. Crystallogr. 46, 396–403 (2013) V. Hrkac, A. Kobler, S. Marauska, A. Petraru, U. Schürmann, V.S.K. Chakravadhanula, V. Duppel, H. Kohlstedt, B. Wagner, V.B. Lotsch, C. Kübel, L. Kienle, Structural study of growth, orientation and defects characteristics in the functional microelectromechanical system material aluminium nitride. J. Appl. Phys. 117, 014301 (2015) L. Kienle, A. Simon, Polysynthetic Twinning in RbIn3S5. J. Solid State Chem. 167, 214–−225 (2002) R. Kilaas, Optimal and near‐optimal filters in high‐resolution electron microscopy. J. Microsc. 190, 45–51 (1998) J.J. Klawitter, S.F. Hulbert, Application of porous ceramics for the attachment of load bearing internal orthopedic applications. J. Biomed. Mater. Res. Symposium. 5, 161–229 (1971) S. Mathur, S. Barth, H. Shen, J.C. Pyun, U. Werner, Size-dependent photoconductance in SnO2 nanowires. 
Small 1, 713–717 (2005) Y.K. Mishra, S. Kaps, A. Schuchardt, I. Paulowicz, X. Jin, D. Gedamu, S. Freitag, M. Claus, S. Wille, A. Kovalev, S.N. Gorb, R. Adelung, Fabrication of Macroscopically Flexible and Highly Porous 3D Semiconductor Networks from Interpenetrating Nanostructures by a Simple Flame Transport Approach. Part. Part. Syst. Charact. 30, 775–783 (2013) J. Molarius, J. Kaitila, T. Pensala, M. Ylilammi, Piezoelecric ZnO films by r.f sputtering. J. Mater. Sci. Mater. Electron. 14, 431–435 (2003) I. Paulowicz, V. Hrkac, S. Kaps, V. Cretu, O. Lupan, T. Braniste, V. Duppel, I. Tiginyanu, L. Kienle, R. Adelung, Three-Dimensional SnO2 Nanowire Networks for Multifunctional Applications: From High-Temperature Stretchable Ceramics to Ultraresponsive Sensors. Adv. Electron. Mater., 1, 1500081, 1–8 (2015) R.W. Rice, Porosity of ceramics: properties and applications (CRC Press, Boca Raton, 1998) U. Schürmann, V. Duppel, S. Buller, W. Bensch, L. Kienle, Precession Electron Diffraction – a versatile tool for the characterization of Phase Change Materials. Cryst. Res. Technol. 46, 561–568 (2011) J.E. Sousa, P.W. Serruys, M.A. Costa, New Frontiers in Cardiology, Drug-Eluting Stents: Part I. Circulation 107, 2274–2279 (2003) P.A. Stadelmann, EMS - a software package for electron diffraction analysis and HREM image simulation in materials science. Ultramicroscopy 21, 131–145 (1987) Y. Xia, P. Yang, Y. Sun, Y. Wu, B. Mayers, B. Gates, Y. Yin, F. Kim, H. Yan, One‐Dimensional Nanostructures: Synthesis, Characterization, and Applications. Adv. Mater. 15, 353–389 (2003) R.Q. Zhang, Y. Lifshitz, S.T. Lee, Oxide‐Assisted Growth of Semiconducting Nanowires. Adv. Mater. 15, 635–640 (2003) J.G. Zheng, X. Pan, M. Schweizer, F. Zhou, W. Weimer, W. Göpel, M. Rühle, Growth twins in nanocrystalline SnO2 thin films by high‐resolution transmission electron microscopy. J. Appl. Phys. 79, 7688 (1996) LK thanks Prof. Lotsch (Max Planck Institute for Solid State Research) for enabling TEM experiments. Financial funding of this work was provided by the German Research Foundation (CRC 1261, project A6). Synthesis and Real Structure, Institute for Materials Science, Kiel University, Kaiserstr. 2, 24143, Kiel, Germany Viktor Hrkac, Niklas Wolff & Lorenz Kienle Nanochemistry, Max Planck Institute for Solid State Research, Heisenbergstr. 1, 70569, Stuttgart, Germany Viola Duppel Phi-Stone AG, Kaiser Str. 2, 24143, Kiel, Germany Ingo Paulowicz Functional Nanomaterials, Institute for Materials Science, University of Kiel, Kaiser Str. 2, 24143, Kiel, Germany Rainer Adelung & Yogendra Kumar Mishra Viktor Hrkac Niklas Wolff Rainer Adelung Yogendra Kumar Mishra Lorenz Kienle LK, VH and NW contributed in data evaluation and structural modelling; LK and VD performed the TEM measurements; IP and YKM were responsible for the synthesis process of SnO2 nanowire networks and SEM characterization. All authors read and approved the final manuscript. Correspondence to Lorenz Kienle. Hrkac, V., Wolff, N., Duppel, V. et al. Atomic structure and crystallography of joints in SnO2 nanowire networks. Appl. Microsc. 49, 1 (2019). https://doi.org/10.1007/s42649-019-0003-7 Tin dioxide network Atomic interface
The global Minmax k-means algorithm Xiaoyan Wang1 & Yanping Bai2 SpringerPlus volume 5, Article number: 1665 (2016) The global k-means algorithm is an incremental approach to clustering that dynamically adds one cluster center at a time through a deterministic global search procedure from suitable initial positions, and employs k-means to minimize the sum of the intra-cluster variances. However, the global k-means algorithm sometimes produces singleton clusters, and the initial positions it selects are sometimes poor; after a bad initialization, the k-means algorithm easily converges to a poor local optimum. In this paper, we first modify the global k-means algorithm to eliminate singleton clusters, and then apply the MinMax k-means clustering error criterion within the global k-means algorithm to overcome the effect of bad initialization, yielding the global Minmax k-means algorithm. The proposed clustering method is tested on several popular data sets and compared to the k-means algorithm, the global k-means algorithm and the MinMax k-means algorithm. The experimental results show that our proposed algorithm outperforms the other algorithms considered in the paper. Clustering is one of the classic problems in pattern recognition, image processing, machine learning and statistics (Xu and Wunsch 2005; Jain 2010; Berkhin 2006). Its aim is to partition a collection of patterns into disjoint clusters, such that patterns in the same cluster are similar, whereas patterns belonging to two different clusters are dissimilar. One of the most popular clustering methods is the k-means algorithm, in which clusters are identified by minimizing the clustering error. Despite its popularity, the k-means algorithm is sensitive to the choice of initial starting conditions (Celebi et al. 2013; Peña et al. 1999; Celebi and Kingravi 2012, 2014). To deal with this problem, the global k-means algorithm has been proposed (Likas et al. 2003), followed by several of its modifications (Bagirov 2008; Bagirov et al. 2011). An extension to kernel space has also been developed (Tzortzis and Likas 2008, 2009), and a fuzzy clustering version is available (Zang et al. 2014). All of these are incremental approaches that start from one cluster and, at each step, deterministically add a new cluster to the solution according to an appropriate criterion. This strategy can also be used to learn the number of clusters in the data (Kalogeratos and Likas 2012). Although the global k-means algorithm is deterministic and often performs well, the newly added cluster center may sometimes be an outlier, in which case some clusters end up containing only a single point and the resulting partition is poor. Another way to avoid the choice of initial starting conditions is to use the multi-restart k-means algorithm (Murty et al. 1999; Arthur and Vassilvitskii 2007; Banerjee and Ghosh 2004). A recent variant of this approach is the MinMax k-means clustering algorithm (Tzortzis and Likas 2014), which starts from a randomly picked set of cluster centers and tries to minimize the maximum intra-cluster error. Its application to intrusion detection (Eslamnezhad and Varjani 2014) shows that the algorithm is effective in that setting. In this paper, a modified version of the global k-means algorithm is proposed in order to avoid singleton clusters. In addition, the initial positions chosen by the global k-means algorithm are sometimes poor, and after a bad initialization the k-means algorithm easily converges to a poor local optimum.
Therefore, we employ the MinMax k-means clustering error criterion instead of the standard k-means clustering error within the global k-means algorithm to tackle this problem, and obtain a deterministic algorithm called the global Minmax k-means algorithm. We carry out extensive experiments on different data sets, and the results show that our proposed algorithm is better than the other algorithms referred to in the paper. The rest of the paper is organized as follows. We briefly describe the k-means, the global k-means and the MinMax k-means algorithms in the "Preliminaries" section. In "The proposed algorithm" section we present our algorithms. The experimental evaluation is presented in the "Experiment evaluation" section. Finally, the "Conclusions" section concludes our work. k-Means algorithm Given a data set \(X=\{x_1,x_2,\ldots ,x_N\}, x_n\in R^d (n=1,2,\ldots ,N)\), we aim to partition this data set into M disjoint clusters \(C_1,C_2,\ldots ,C_M\), such that a clustering criterion is optimized. Usually, the clustering criterion is the sum of the squared Euclidean distances between each data point \(x_n\) and the cluster center \(m_k\) that \(x_n\) belongs to. This kind of criterion is called the clustering error and depends on the cluster centers \(m_1,m_2,\ldots ,m_M\): $$\begin{aligned} E\left( m_1,m_2,\ldots ,m_M\right) =\sum \limits _{i=1}^{N}\sum \limits _{k=1}^{M}I\left( x_i\in C_k\right) \Vert x_i-m_k\Vert ^2, \end{aligned}$$ where $$\begin{aligned} I(X)=\left\{ \begin{array}{ll} 1,&{}\quad X{\text { is true}},\\ 0,&{}\quad {\text {otherwise}}.\end{array}\right. \end{aligned}$$ Generally, we call \(\sum \nolimits _{i=1}^{N}I(x_i\in C_k)\Vert x_i-m_k\Vert ^2\) the intra-cluster error (variance) of cluster \(C_k\). Obviously, the clustering error is the sum of the intra-cluster errors. Therefore, for brevity we write \(E_{sum}\) instead of \(E(m_1,m_2,\ldots ,m_M)\), i.e. \(E_{sum}=E(m_1,m_2,\ldots ,m_M)\). The k-means algorithm finds locally optimal solutions with respect to the clustering error. The main disadvantage of the method is its sensitivity to the initial positions of the cluster centers. The global k-means algorithm To deal with the initialization problem, the global k-means algorithm has been proposed; it is an incremental deterministic algorithm that employs k-means as a local search procedure, and it obtains optimal or near-optimal solutions in terms of clustering error. In order to solve a clustering problem with M clusters, Likas et al. (2003) proceed as follows. The algorithm starts with one cluster \((k=1)\) and finds its optimal position, which corresponds to the data set centroid. To solve the problem with two clusters \((k=2)\) they run the k-means algorithm N times (N is the size of the data set), each time starting with the following initial positions of the cluster centers: the first cluster center is always placed at the optimal position for the problem with \(k=1\), and the other, at execution n, is placed at the position of the data point \(x_n(n=1,2,\ldots ,N)\). The solution with the lowest clustering error is kept as the solution of the 2-clustering problem. In general, let \((m_1^*,m_2^*,\ldots ,m_k^*)\) denote the final solution for the k-clustering problem. Once the solution for the \((k-1)\)-clustering problem has been found, the solution of the k-clustering problem is sought as follows: N executions of the k-means algorithm are performed with \((m_1^*,m_2^*,\ldots ,m_{(k-1)}^*,x_n)\) as initial cluster centers for the \(n\hbox {th}\) run, and the solution resulting in the lowest clustering error is kept. By proceeding in this fashion, a solution with M clusters is finally obtained, together with solutions for all k-clustering problems with \(k<M\).
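A minimal Python sketch of this incremental seeding scheme is given below. It is an illustration of the procedure just described rather than the original authors' implementation: it assumes `X` is an N × d NumPy array, the function name is arbitrary, and scikit-learn's KMeans is used as the local search routine.

```python
import numpy as np
from sklearn.cluster import KMeans

def global_kmeans(X, M):
    """Incremental global k-means seeding: grow from 1 to M centers, trying every
    data point as the candidate position for the newly added center."""
    centers = X.mean(axis=0, keepdims=True)        # k = 1: the data set centroid
    for k in range(2, M + 1):
        best_inertia, best_centers = np.inf, None
        for x in X:                                # N k-means runs for each k
            init = np.vstack([centers, x])         # previous centers plus candidate seed
            km = KMeans(n_clusters=k, init=init, n_init=1).fit(X)
            if km.inertia_ < best_inertia:         # inertia_ is the clustering error E_sum
                best_inertia, best_centers = km.inertia_, km.cluster_centers_
        centers = best_centers
    return centers
```

For N points this performs N(M − 1) complete k-means runs, which is the computational burden that motivates the fast variant discussed next.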
The version of the global k-means algorithm described above is computationally impractical for clustering middle-sized and large data sets. Two modifications were proposed to reduce the complexity (Likas et al. 2003), and we are interested in the first of these. Let \(d_{k-1}^j\) be the squared distance between \(x_j\) and the closest center among the \(k-1\) cluster centers obtained so far. In order to find the starting point for the kth cluster center, for each \(x_n\in R^d,n=1,2,\ldots ,N\) we compute \(b_n\) as follows: $$\begin{aligned} b_n=\sum \limits _{j=1}^{N}\max \left( d_{k-1}^j-\Vert x_n-x_j\Vert ^2,0\right) . \end{aligned}$$ The quantity \(b_n\) measures the guaranteed reduction in the error measure obtained by inserting a new cluster center at point \(x_n\). It is clear that a data point \(x_n\in R^d\) with the largest value of \(b_n\) is the best candidate to be a starting point for the kth cluster center. Therefore, we compute \(i=\arg \max \nolimits _{n} b_n\) and select the corresponding data point \(x_i\) as the starting point for the kth cluster center. The MinMax k-means algorithm As noted above, the k-means algorithm minimizes the clustering error. Instead, the MinMax k-means algorithm minimizes the maximum intra-cluster error $$\begin{aligned} E_{\max }=\max _{1\le k\le M}\sum \limits _{i=1}^{N}I(x_i\in C_k)\Vert x_i-m_k\Vert ^2, \end{aligned}$$ where \(m_k,I(x)\) are defined as in (1). Since directly minimizing the maximum intra-cluster variance \(E_{\max }\) is difficult, a relaxed maximum variance objective was proposed (Tzortzis and Likas 2014), constructed as a weighted formulation \(E_w\) of the sum of the intra-cluster variances (4) $$\begin{aligned} \begin{array}{ll} E_w=\sum \limits _{k=1}^{M} w_k^p\sum \limits _{i=1}^{N}I\left( x_i\in C_k\right) \Vert x_i-m_k\Vert ^2,\\ w_k\ge 0,\sum \limits _{k=1}^{M}w_k=1, \quad 0\le p\le 1, \end{array} \end{aligned}$$ where the exponent p is a constant. The greater (smaller) the value of p, the less (more) similar the weight values become, as relative differences of the variances among the clusters are enhanced (suppressed). Now all clusters contribute to the objective, to different degrees regulated by the \(w_k\) values. It is clear that the more a cluster contributes (higher weight), the more intensely its variance will be minimized. The weights \(w_k\) are therefore calculated by formula (5) $$\begin{aligned} w_k=v_k^{1\diagup (1-p)}\Big /\sum \limits _{k'=1}^{M} v_{k'}^{1\diagup (1-p)}, \quad {\text {where}}\, v_k=\sum \limits _{i=1}^{N}I(x_i\in C_k)\Vert x_i-m_k\Vert ^2. \end{aligned}$$ To enhance the stability of the MinMax k-means algorithm, a memory effect can be added to the weights: $$\begin{aligned} w_k^{(t)}=\beta w_k^{(t-1)}+(1-\beta )\left( v_k^{1\diagup (1-p)}\Big / \sum \limits _{k'=1}^{M} v_{k'}^{1\diagup (1-p)}\right) ,\quad 0\le \beta \le 1. \end{aligned}$$ The proposed algorithm The modified global k-means algorithm As noted above, the global k-means algorithm may produce singleton clusters if the initial centers are outliers. To avoid this, we propose the modified global k-means algorithm. Algorithm 1: The modified global k-means algorithm. Step 1 (Initialization) Compute the centroid \(m_1\) of the data set X: $$\begin{aligned} m_1=\frac{1}{N}\sum \limits _{i=1}^{N}x_i,\,x_i\in X,\quad i=1,2,\ldots ,N, \end{aligned}$$
and set \(k=1\); Step 2 (Stopping criterion) Set \(k=k+1\). If \(k>M\), then stop; Step 3 Take the centers \(m_1,m_2,\ldots ,m_{k-1}\) from the previous iteration and consider each point \(x_i\) of X as a starting point for the kth cluster center, thus obtaining N initial solutions with k points \((m_1,m_2,\ldots ,m_{k-1},x_i)\); Step 4 Apply the k-means algorithm to each of them; keep the best k-partition obtained and its centers \(y_1,y_2,\ldots ,y_k\); Step 5 (Detect singleton clusters) If the obtained partition contains a singleton cluster, delete the point \(y_k\) from the set X of candidate initial centers and go to step 3; otherwise go to step 6; Step 6 Set \(m_i=y_i,\,i=1,2,\ldots ,k\,\) and go to step 2. Owing to the high computational cost of the global k-means algorithm, we also propose a fast variant (Algorithm 2). It is based on the same idea as the fast global k-means variant proposed in Peña et al. (1999). Steps 1, 2 and 6 are the same as in Algorithm 1; steps 3, 4 and 5 are modified as follows: Step 3′ Take the centers \(m_1,m_2,\ldots ,m_{k-1}\) from the previous iteration and consider each point \(x_i\) of X as a starting point for the kth cluster center, then calculate \(b_i\) using Eq. (2) and choose the starting point with the maximum \(b_i\) as the best candidate; Step 4′ Apply the k-means algorithm to this best candidate solution; keep the k-partition obtained and its centers \(y_1,y_2,\ldots ,y_k\); Step 5′ (Detect singleton clusters) If the obtained partition contains a singleton cluster, set the corresponding \(b_i=0\) and go to step 3′; otherwise go to step 6. In our numerical experiments we use Algorithm 2. We first tested the proposed algorithm on a real data set containing the scores of 41 students, each with grades in 11 subjects. When the global k-means algorithm is used to cluster the students according to their subject scores, the output is poor. The comparison between the global k-means algorithm and the modified global k-means algorithm is given in Table 1. Table 1 Comparative results Table 1 shows that when the data are partitioned into four clusters, two of the clusters produced by the global k-means algorithm contain only one element, i.e. there are two singleton clusters. We also find that the \(E_{sum}\) of the modified global k-means algorithm is lower than that of the global k-means algorithm. The global k-means algorithm is a deterministic global search procedure from suitable initial positions, but the initial positions are sometimes poor. An example is illustrated in Fig. 1. The MinMax k-means algorithm has been shown to be effective and robust against bad initializations (Murty et al. 1999), but it is not deterministic and needs multiple restarts. We therefore combine the global k-means algorithm and the MinMax k-means algorithm, i.e. we apply the MinMax k-means clustering error criterion within the global k-means algorithm, and obtain a deterministic algorithm called the global Minmax k-means algorithm. Example a shows the initial point for \(k=2\) chosen by the global algorithm, which is clearly a bad initial point; example b shows a better initial point. The global Minmax k-means algorithm is an incremental approach to clustering that dynamically adds one cluster center at a time through a deterministic global search procedure from suitable positions, like the global k-means algorithm; this procedure was introduced in the preliminaries. After choosing the initial centers, we employ the MinMax k-means method, sketched below, to minimize the maximum intra-cluster variance. The MinMax k-means algorithm was described in the preliminaries.
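Before stating the full procedure, the sketch below illustrates (again as an assumption-laden illustration, not the authors' code) the weighted MinMax k-means iteration of Eqs. (4)–(6) for a fixed exponent p and memory parameter β; within the proposed algorithm, a routine of this kind replaces the plain k-means call that Algorithm 1 applies to each candidate initialization \((m_1,\ldots ,m_{k-1},x_i)\). The adaptive p schedule (\(p_{init}\), \(p_{step}\), \(p_{max}\)) used in the experiments and a proper convergence test are omitted for brevity, and the default parameter values are placeholders.

```python
import numpy as np

def minmax_weighted_kmeans(X, init_centers, p=0.3, beta=0.1, n_iter=100):
    """One run of the weighted MinMax k-means iteration minimising the relaxed
    objective E_w: weighted assignment, weight update with memory (Eq. 6),
    then the usual center update. Returns centers, labels and variances v_k."""
    centers = np.asarray(init_centers, dtype=float).copy()
    M = centers.shape[0]
    w = np.full(M, 1.0 / M)                                          # start from uniform weights
    for _ in range(n_iter):
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)   # squared distances
        labels = np.argmin((w[None, :] ** p) * d2, axis=1)               # weighted assignment
        v = np.array([d2[labels == k, k].sum() for k in range(M)])       # intra-cluster variances
        u = v ** (1.0 / (1.0 - p))
        w = beta * w + (1.0 - beta) * u / u.sum()                        # memory effect, Eq. (6)
        for k in range(M):                                               # standard center update
            if np.any(labels == k):
                centers[k] = X[labels == k].mean(axis=0)
    return centers, labels, v
```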
The whole procedure of the proposed method is given as Algorithm 3. Algorithm 3: The global Minmax k-means algorithm. Step 1 (Initialization) Compute the centroid \(m_1\) of the set X, using (7). Step 3 Take the centers \(m_1,m_2,\ldots ,m_{k-1}\) from the previous iteration and consider each point \(x_i\) of X as a starting point for the kth cluster center, thus obtaining N initial solutions with k points \((m_1,m_2,\ldots ,m_{k-1},x_i)\); Step 4 Apply the MinMax k-means algorithm to each of them; keep the best k-partition obtained and its centers \(y_1,y_2,\ldots ,y_k\); Step 5 (Detect singleton clusters) If the obtained partition contains a singleton cluster, delete the point \(y_k\) from the set of candidate initial centers and go to step 3; otherwise go to step 6; Step 6 Set \(m_i=y_i,\,i=1,2,\ldots ,k\,\) and go to step 2. Experiment evaluation In the following subsections we provide extensive experimental results comparing the global Minmax k-means algorithm with the k-means algorithm, the global k-means algorithm and the MinMax k-means algorithm. In the experiments, the results reported for the k-means algorithm and the MinMax k-means algorithm are the averages of \(E_{max}\) and \(E_{sum}\), defined by (3) and (1) respectively, over 100 random restarts. For the MinMax k-means algorithm and the global Minmax k-means algorithm, some additional parameters (\(\beta ,p\)) must be fixed prior to execution. Tzortzis and Likas (2014) give a practical framework that extends MinMax k-means to automatically adapt the exponent p to the data set: it begins with a small p (\(p_{init}\)) that after each iteration is increased by \(p_{step}\), until a maximum value (\(p_{max}\)) is attained. Following this method, the parameters \(p_{init}\), \(p_{max}\) and \(p_{step}\) must be decided first. We set \(p_{init}=0,\,p_{step}=0.01\) and write p for \(p_{max}\) in all MinMax k-means and global Minmax k-means experiments. In Tables 2, 3 and 8 we do not report the value of the parameter p, since different values of p give the same result. Table 2 Comparative results on \(S_1\) data set Synthetic data sets Four typical synthetic data sets \(S_1,S_2,S_3,S_4\) are tested in this section, as in Fang et al. (2013). They are generated from mixtures of four or three bivariate Gaussian distributions in the plane, so each cluster takes the form of a Gaussian distribution. In particular, all the Gaussian distributions have covariance matrices of the form \(\sigma ^{2}I\), where \(\sigma \) is the standard deviation. For the first three data sets, four Gaussian distributions, all with 300 sample points, are located at \((-1,0),(1,0),(0,1)\) and \((0,-1)\), respectively, and their standard deviations \(\sigma \) are the same within each data set but vary between the data sets. Specifically, \(\sigma \) takes the values 0.2, 0.3 and 0.4 for \(S_1,S_2,S_3\), respectively. In this way, the degree of overlap among the clusters increases considerably from \(S_1\) to \(S_3\) and the corresponding classification problem therefore becomes more complicated. As for \(S_4\), we use three Gaussian distributions located at (1, 0), (0, 1) and \((0,-1)\), with 400, 300 and 200 sample points, respectively. Therefore, \(S_4\) represents the asymmetric situation where the clusters do not take the same shape and also have different numbers of sample points. The data sets are shown in Fig. 2. The sketch of the four typical synthetic data sets: a \(S_1\), b \(S_2\), c \(S_3\), d \(S_4\) Real-world data sets Coil-20 is a data set (Nene et al.
1996), which contains 72 images taken from different angles for each of the 20 included objects. We used three subsets, Coil15, Coil18 and Coil19, with images from 15, 18 and 19 objects, respectively, as in Tzortzis and Likas (2014). The data set includes 216 instances and each instance has 1000 features. Iris (UCI) (Frank and Asuncion 2010) is a famous data set created by R.A. Fisher. There are 150 instances, 50 in each of three classes. Each instance has four predictive attributes. Seeds (UCI) (Frank and Asuncion 2010) is composed of 210 records extracted from three different varieties of wheat. The classes are of equal size and each grain is described by seven features. Yeast (UCI) (Frank and Asuncion 2010) includes 1484 instances concerning the cellular localization sites of proteins, with eight attributes. The proteins belong to ten categories. Five of the classes are extremely under-represented and are not considered in our evaluation. The data set is unbalanced. Pendigits (UCI) (Frank and Asuncion 2010) includes 10,992 instances of handwritten digits (0–9) from the UCI repository (Eslamnezhad and Varjani 2014), with 16 attributes. The data set is almost balanced. User Knowledge Modeling (UCI) (Frank and Asuncion 2010) concerns students' knowledge status about the subject of Electrical DC Machines. It includes 403 instances in a 6-dimensional feature space. The data set is unbalanced, and the students are assessed at four levels. In the experiments, the sample data of the Iris, Seeds and Pendigits data sets are first normalized using the z-score method and the algorithms are run on the normalized data. A summary of the data sets is provided in Table 6. The comparison of the algorithms across the various data sets is shown in Tables 2, 3, 4, 5, 7, 8, 9, 10, 11 and 12. In these tables, first, we find that the global Minmax k-means algorithm attains a better \(E_{max}\) than the k-means algorithm and the global k-means algorithm, and in most cases it is better than, and sometimes equal to, the MinMax k-means algorithm. Second, the proposed method outperforms the k-means algorithm for all the metrics reported in these tables, except in Table 3, where all algorithms give the same result. Third, the global Minmax k-means algorithm reaches the lowest \(E_{sum}\), except in Tables 7 and 10. As our method employs both the global k-means and the MinMax k-means algorithm, it performs better than, or at worst matches, each of them. In Tables 4, 5, 11 and 12, our proposed method attains both the lowest \(E_{max}\) and the lowest \(E_{sum}\). In Table 11, although the global k-means algorithm also reaches the lowest \(E_{sum}\), the \(E_{max}\) of its solution is larger than ours. In Tables 4 and 5, the MinMax k-means algorithm can also reach the lowest \(E_{max}\), but it cannot attain the lowest \(E_{sum}\). In Tables 7 and 10, the proposed method does not give the lowest \(E_{sum}\), but it is the only method that attains the lowest \(E_{max}\). In Tables 2 and 9, all algorithms except k-means perform equally. In Table 8, the MinMax k-means and global Minmax k-means algorithms give the same result, and both are better than k-means and global k-means.
Table 6 The brief description of the real data sets Table 7 Comparative results on the Coil2 data set Table 8 Comparative results on the Iris data set Table 9 Comparative results on the Seeds data set Table 10 Comparative results on the Yeast data set Table 11 Comparative results on the Pendigit data set Table 12 Comparative results on the user knowledge modeling data set In the experiments, we find that the memory parameter \(\beta \) and the exponent parameter p affect the results of the MinMax k-means and the global Minmax k-means algorithms, and the variation does not follow any obvious rule. A practical framework that extends MinMax k-means to automatically adapt the exponent to the data set was proposed in Tzortzis and Likas (2014). The authors assumed that, once \(p_{max}\) has been set, the procedure can reach the lowest \(E_{max}\) at some \(p\in [p_{init},p_{max}]\). However, our experiments show that this is not always the case: in Tables 10 and 11, setting \(p_{max}=0.3\) gives better results than \(p_{max}=0.5\). The experiments also show that \(E_{max}\) and \(E_{sum}\) cannot, in general, attain their lowest values at the same time. We have modified the global k-means algorithm to circumvent singleton clusters. We have also presented the global Minmax k-means algorithm, which constitutes a deterministic clustering method in terms of the MinMax k-means clustering error, i.e. it minimizes the maximum intra-cluster error. The method is independent of any starting conditions and compares favorably to the k-means algorithm and the MinMax k-means algorithm with multiple random restarts. We have also compared our method with the global k-means algorithm. The experimental results show that the proposed method combines the advantages of the global k-means and the MinMax k-means algorithms, i.e. we obtain a deterministic clustering method that needs no restarts and consistently performs well. As future work, we plan to study adaptive methods for determining the exponent parameter p and the memory parameter \(\beta \) such that \(E_{max}\) or \(E_{sum}\) attains its lowest value, and ideally to tune the two parameters at the same time. Arthur D, Vassilvitskii S (2007) k-means++: the advantages of careful seeding. In: ACM-SIAM symposium on discrete algorithm (SODA), pp 1027–1035 Bagirov AM (2008) Modified global k-means algorithm for minimum sum-of-squares clustering problems. Pattern Recognit 41:3192–3199 Bagirov AM, Ugon J, Webb D (2011) Fast modified global k-means algorithm for incremental cluster construction. Pattern Recognit 44:866–876 Banerjee A, Ghosh J (2004) Frequency-sensitive competitive learning for scalable balanced clustering on high-dimensional hyperspheres. IEEE Trans Neural Netw 15(3):702–719 Berkhin P (2006) A survey of clustering data mining techniques. In: Kogan J, Nicholas C, Teboulle M (eds) Grouping multidimensional data: recent advances in clustering. Springer, Berlin, pp 25–71 Celebi ME, Kingravi H (2012) Deterministic initialization of the K-means algorithm using hierarchical clustering. Int J Pattern Recognit Artif Intell 26(7):1250018 Celebi ME, Kingravi H (2014) Linear, deterministic, and order-invariant initialization methods for the K-means clustering algorithm. In: Celebi ME (ed) Partitional clustering algorithms. Springer, Berlin, pp 79–98 Celebi ME, Kingravi HA, Vela PA (2013) A comparative study of efficient initialization methods for the k-means clustering algorithm. Expert Syst Appl 40:200–210 Eslamnezhad M, Varjani AY (2014) Intrusion detection based on MinMax K-means clustering.
In: 2014 7th International symposium on telecommunications (IST'2014), pp 804–808 Fang C, Jin W, Ma J (2013) \(k^{{\prime }}\)-Means algorithms for clustering analysis with frequency sensitive discrepancy metrics. Pattern Recognit Lett 34:580–586 Frank A, Asuncion A (2010) UCI machine learning repository. http://archive.ics.uci.edu/ml Jain AK (2010) Data clustering: 50 years beyond K-means. Pattern Recognit Lett 31:651–666 Kalogeratos A, Likas A (2012) Dip-means: an incremental clustering method for estimating the number of clusters. In: Advances in neural information processing systems (NIPS), pp 2402–2410 Likas A, Vlassis N, Verbeek JJ (2003) The global k-means clustering algorithm. Pattern Recognit 36:451–461 Murty MN, Jain AK, Flynn PJ (1999) Data clustering: a review. ACM Comput Surv 31(3):264–323 Nene SA, Nayar SK, Murase H (1996) Columbia Object Image Library (COIL-20). Technical Report CUCS 005-96 Peña JM, Lozano JA, Larrañaga P (1999) An empirical comparison of four initialization methods for the K-means algorithm. Pattern Recognit Lett 20:1027–1040 Tzortzis GF, Likas AC (2009) The global kernel k-means algorithm for clustering in feature space. IEEE Trans Neural Netw 20(7):1181–1194 Tzortzis G, Likas A (2014) The MinMax k-Means clustering algorithm. Pattern Recognit 47:2505–2516 Tzortzis G, Likas A (2008) The global kernel k-Means algorithm. In: International joint conference on neural networks (IJCNN), pp 1977–1984 Xu R, Wunsch DC (2005) Survey of clustering algorithms. IEEE Trans Neural Netw 16(3):645–678 Zang X, Vista FP IV, Chong KT (2014) Fast global kernel fuzzy c-means clustering algorithm for consonant/vowel segmentation of speech signal. J Zhejiang Univ Sci C (Comput Electron) 15(7):551–563 XW and YB proposed and designed the research; XW performed the simulations, analyzed the simulation results and wrote the paper. Both authors read and approved the final manuscript. The authors are thankful for the support of the National Natural Science Foundation of China (61275120, 61203228, 61573016). School of Information and Communication Engineering, North University of China, Taiyuan, 030051, People's Republic of China Xiaoyan Wang School of Science, North University of China, Taiyuan, 030051, People's Republic of China Yanping Bai Correspondence to Yanping Bai. Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. MinMax k-means Global k-means
Reading 7 - 10. Kurtosis in Return Distributions
j. explain skewness and the meaning of a positively or negatively skewed return distribution; k. describe the relative locations of the mean, median, and mode for a unimodal, nonsymmetrical distribution; l. explain measures of sample skewness and kurtosis;
What is Kurtosis? Kurtosis is a measure that tells us how much more or less "peaked" a distribution is compared to a normal distribution, based on the size of the distribution's tails.
What is "platykurtic"? What does it mean? It is a distribution with thinner tails that is less peaked than a normal distribution. It means a return distribution has fewer returns with large deviations from the mean.
What is "leptokurtic"? What does it mean? It is a distribution with relatively fatter tails that is more peaked than a normal distribution. It represents return distributions with more returns clustered tightly around the mean and more returns with large deviations from the mean (in the tails).
What is "mesokurtic"? A distribution with the same kurtosis as the normal distribution is called "mesokurtic."
Why is kurtosis critical in risk management settings? It is critical because most securities' returns exhibit both skewness and kurtosis. Many risk managers focus more on kurtosis than on standard deviation because kurtosis describes the returns in the tails of the distribution, which is where the risk is.
What is the mathematical formula to calculate skewness? What does the value mean?
$$\frac{\sum (X_i - \mu)^3}{N \sigma^3}$$
- μ = population mean
- σ = standard deviation
- The normal distribution has a skew of 0 because it is symmetric
- If skewness is positive, it means that the average magnitude of positive deviations is larger than the average magnitude of negative deviations
© Travis Giggy
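A small NumPy sketch of the moment-based skewness and excess-kurtosis calculations implied by the formula above. It is illustrative only: the function names are arbitrary, the return series in the example are simulated, and scipy.stats.skew / scipy.stats.kurtosis in their default (biased) settings should give the same numbers.

```python
import numpy as np

def skewness(x):
    """Moment-based skewness matching the formula above: sum((x - mu)^3) / (N * sigma^3)."""
    x = np.asarray(x, dtype=float)
    mu, sigma = x.mean(), x.std()          # population (ddof=0) mean and standard deviation
    return ((x - mu) ** 3).mean() / sigma ** 3

def excess_kurtosis(x):
    """Moment-based kurtosis minus 3, so a normal distribution scores about 0 (mesokurtic)."""
    x = np.asarray(x, dtype=float)
    mu, sigma = x.mean(), x.std()
    return ((x - mu) ** 4).mean() / sigma ** 4 - 3.0

# Example: normally distributed returns vs. fat-tailed (leptokurtic) returns
rng = np.random.default_rng(0)
normal_returns = rng.normal(0.0, 0.01, 5000)
fat_tailed_returns = rng.standard_t(df=4, size=5000) * 0.01
print(excess_kurtosis(normal_returns))      # close to 0
print(excess_kurtosis(fat_tailed_returns))  # clearly positive: more extreme returns in the tails
```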
A deep geothermal exploration well at Eastgate, Weardale, UK: a novel exploration concept for low-enthalpy resources D.A.C. Manning, P.L. Younger, F.W. Smith, J.M. Jones, D.J. Dufton and S. Diskin Journal of the Geological Society, 164, 371-382, 1 March 2007, https://doi.org/10.1144/0016-76492006-015 D.A.C. Manning 2Institute for Research on the Environment and Sustainability, Newcastle University, Newcastle upon Tyne NE1 7RU, UK 1School of Civil Engineering and Geosciences, Newcastle University, Newcastle upon Tyne NE1 7RU, UK (e-mail: [email protected]) P.L. Younger F.W. Smith 3FWS Consultants Ltd, Merrington House, PO Box 11, Merrington Lane Trading Estate, Spennymoor DL16 7UU, UK 4March House, Horsley NE15 0HZ, UK D.J. Dufton 5PB Power, William Armstrong Drive, Newcastle upon Tyne NE4 7YQ, UK S. Diskin 6Foraco–Boniface SA, ZI des Fournels, BP 173, 34401 Lunel, France The first deep geothermal exploration borehole (995 m) to be drilled in the UK for over 20 years was completed at Eastgate (Weardale, Co. Durham) in December 2004. It penetrated 4 m of sandy till (Quaternary), 267.5 m of Lower Carboniferous strata (including the Whin Sill), and 723.5 m of the Weardale Granite (Devonian), with vein mineralization occurring to 913 m. Unlike previous geothermal investigations of UK radiothermal granites that focused on the hot dry rock concept, the Eastgate Borehole was designed to intercept deep fracture-hosted brines associated with the major, geologically ancient, hydrothermal vein systems. Abundant brine (≤46 °C) was encountered within natural fracture networks of very high permeability (transmissivity c. 2000 darcy m) within granite. Evidence for the thermal history of the Carboniferous rocks from phytoclast reflectance measurements shows very high values (≥3.3%) indicating maximum temperatures of 130 °C prior to intrusion of the Whin Sill. Geochemical analysis of cuttings samples from the Eastgate Borehole suggests radiothermal heat production rates for unaltered Weardale Granite averaging 4.1 μW m−3, with a mean geothermal gradient of 38 °C km−1. The Eastgate Borehole has significant exploitation potential for direct heat uses; it demonstrates the potential for seeking hydrothermal vein systems within radiothermal granites as targets for geothermal resources. Geothermal energy is used globally in applications ranging from electricity generation (using steam to drive turbines) to space heating using ground source heat pumps (GSHP), which extract energy from shallow groundwater and/or soil atmospheres. In Europe, geothermal energy use depends on the balance of availability of differing energy sources.
Thus Iceland, with readily available volcanogenic geothermal resources, is one of the world's major users for space heating. The UK has been very slow to develop its more modest geothermal resources, with only one significant scheme (≤1.4 MW; a single geothermal well contributing to Southampton's district heating scheme), and a growing number of GSHP applications for individual buildings (mainly new-build domestic premises). The slow uptake in the UK partly reflects the ready availability of indigenous oil and gas over the last three decades. Norway and Sweden show the contrast in geothermal uptake between countries with and without hydrocarbon resources: Norway makes almost no use of geothermal energy, whereas Sweden is one of Europe's greatest users, entirely in the form of GSHP (Sanner et al. 2003). As energy costs rise, especially for fossil fuels, interest in alternative sources is increasing. Globally, nuclear energy and coal (with varying degrees of 'cleanliness' in burning) will be used increasingly for electricity generation during the 21st century. Interest in renewable sources of energy, such as wind power, is also increasing. But geothermal energy can play a much greater role within a portfolio of energy supply options than is currently considered. There are potentially many locations where geothermal energy is insufficient to generate electricity, but where it can contribute to space heating and other low-grade uses currently met by distributed fossil fuel combustion. Displacement of the domestic gas boiler from tens of millions of homes would contribute more to the UK's greenhouse gas emission reduction obligations than almost any other technological change (cf. Caneta Research 1999). In Weardale, County Durham, UK, the recent decommissioning of the Lafarge cement works has led to a major redevelopment opportunity at a rural site. The Weardale Task Force anticipates a mixed-use redevelopment, making use of indigenous renewable energy sources, including wind and hydroelectric power generation, and biomass combined heat and power. Geothermal energy is being considered for space heating and for use in a spa tourist attraction. The concepts underlying the geothermal exploration activity undertaken at the Eastgate site are novel, and represent a significant development from those previously applied to the inferred geothermal resources of the UK. Initiated in 1976 at the time of the Middle East oil crisis, geothermal exploration in the UK focused until its end in 1990 on the potential for generating electricity. The cost-effective generation of electricity from geothermal resources (DiPippo 2005) generally requires that waters yielded by production wells have temperatures considerably in excess of 100 °C, so that on exposure to atmospheric (or near-atmospheric) pressure, they will 'flash', i.e. liberate steam at rates high enough to drive conventional turbines. Additionally, binary cycles (including the Kalina cycle) are used for power generation in which geothermal waters at <140 °C vaporize other 'working fluids' to produce gas streams sufficient to turn turbines (DiPippo 2005). Outside active volcanic areas, there are few parts of the world in which economically accessible geothermal waters are hot enough for flash or binary cycle electricity generation. However, there are many direct uses that can exploit the thermal energy in sub-100 °C geothermal waters (e.g. space or district heating; greenhouse heating; aquaculture; health spa tourism developments, etc.; Lund et al. 
1998; Dickson & Fanelli 2005). These options were not a priority during the 1976–1990 investigations in the UK. Rather, two principal options for deriving supra-100 °C waters from the UK continental crust were examined (Barker et al. 2000). (1) Hydrothermal aquifers: corresponding principally to Mesozoic basins, in which permeable formations are likely to be present at depths that will ensure that they contain hot water. (2) Hot dry rock (HDR): in which the heat generated by radiothermal granites was to be exploited by drilling deep boreholes, between which fractures would be developed artificially (e.g. hydraulic fracturing; explosives). Cool water would be pumped down into the fractured granite, left to equilibrate thermally, and then pumped out again at much higher temperatures. In many ways, both of these exploration concepts were decades ahead of their time. Although the hydrothermal aquifer investigations did not result in the identification of flash or binary grade energy, they did identify resources suitable for direct-use applications, as used to the present day by the Southampton Geothermal Heating Company. Many other similar resources await exploitation. Although the HDR experiments undertaken in the Carnmenellis Granite (Cornwall), using boreholes sunk at Rosemanowes Quarry to depths of 1700–2600 m, did not result in an operational geothermal exploitation scheme (Richards et al. 1994), they did yield a dataset used widely to test numerical simulation codes (Kolditz & Clauser 1998). They provided much of the conceptual basis upon which the European Union pilot project at Soultz-sous-Forêts (Rhine Graben) has successfully built (e.g. Bachler & Kohl 2005), using artificial fracturing to enhance natural structures in granite to obtain an exploitable resource, which is now generating electricity using a binary power plant. Given their focus on resources suitable for electricity generation, the 1976–1990 investigations did not fully consider the direct use for space heating of deep groundwater or GHSP resources. They were also too early to investigate the possibilities of man-made hydrothermal aquifers now associated with saturated mine workings at great depth (>1000 m), which have become flooded only since 1990 (see Younger & Adams 1999; Banks et al. 2004). In this paper, we present a further exploration possibility: the concept that ancient hydrothermal vein structures associated with the radiothermal granites of the UK still function as geothermal plumbing systems, and may be economically viable. In testing this concept, we have sunk the first deep geothermal exploration borehole to be drilled in the UK for 20 years, and only the second borehole ever to penetrate the Weardale Granite, a key component of the classic 'block-and-basin' geology of the Carboniferous of northern England (e.g. Fraser & Gawthorpe 2003). Geological background Weardale lies within the UK's first Geopark, designated in recognition of the importance of the North Pennines in shaping views on the origin of Mississippi-Valley Type mineral deposits. The North Pennines Orefield is famous (Dunham 1990) for its zoned fluorite–sphalerite–galena–barite mineralization, which is principally developed in Lower Carboniferous limestones. The zoning of the orefield stimulated geophysical investigations that demonstrated a gravity 'low' coinciding with the fluorite zone of the mineralization (Bott & Masson-Smith 1953, 1957), which was interpreted as being due to the presence of a buried granite. 
Drilling at Rookhope in 1960 (Dunham et al. 1965) proved the existence of the Weardale Granite, which unexpectedly proved to be Early Devonian in age, hence older than the mineralized host rocks, a finding that required a complete revision of ore deposit models for the North Pennines (Dunham 1990). Interest in the Weardale Granite as a 'radiothermal' granite is based on its comparatively high concentrations of uranium, thorium and potassium. It is one of a family of similar granites in the UK with high heat production rates (Webb et al. 1985, 1987; Downing & Gray 1986), including the Carnmenellis Granite (the former HDR prospect in Cornwall; Richards et al. 1994). In the late 1980s, investigations were carried out on a tepid saline water found at depth in Cambokeels Mine at Eastgate (Manning & Strutt 1990), issuing from the eastern forehead of the fluorite-bearing Slitt Vein where it cuts Dinantian limestones and clastic sediments. Manning & Strutt (1990) compared the Cambokeels mine water with saline waters, originating from fractures in the Carnmenellis Granite, reported by Edmunds et al. (1984). They suggested that the Eastgate mine water was derived from deep within the Weardale Granite, and that it had precipitated silica minerals during its ascent through the Slitt Vein fracture system. Use of geochemical thermometers suggested that the water had equilibrated with a granitic host at temperatures up to 150 °C. There is further evidence of a high geothermal gradient in the Eastgate–Rookhope area. Downing & Gray (1986) reported a temperature gradient of 30 °C km−1 for the Rookhope Borehole, and Younger (2000) reported a gradient of over 60 °C km−1 from the Frazer's Grove Mine (Greencleugh Vein, similar in structure and filling to the Slitt Vein), several kilometres west of the Rookhope Borehole and directly above a mineralization spreading centre (Dunham 1990). On the basis of the geological information summarized above, it was decided that the geothermal exploration well at Eastgate should commence within the Slitt Vein as a suspected pathway for deep water upflow, and attempt to follow the vein and associated splays vertically downwards for up to 1 km (Fig. 1), the total depth being limited by the available budget. A starting position above the sub-crop of the Slitt Vein against the base of the Quaternary deposits was identified by trial-pitting and drilling five inclined boreholes (at 45–60°) to depths of up to 60 m. The final borehole site was precisely surveyed by global positioning system (GPS) with reference to the National Grid, and lies at 393890.932 E 538200.147 N, commencing at a surface elevation of 250.867 m above Ordnance Datum (AOD). In the interests of speed and economy, the deep borehole was drilled 'open-hole' (by Foraco, Lunel, France), recovering cuttings at 1 m intervals (within sedimentary rocks and granite) down to 615 m, and then at 5 m intervals (granite). From surface to 93 m the well was cased and grouted to 13 3/8 inches, from 93 m to 403 m cased and grouted to 9 5/8 inches (with this casing being continued to surface). From 403 m to full depth at 995 m, the borehole was completed without casing, at a drilled diameter of 8 1/2 inches. Schematic cross-section to illustrate the design of the Eastgate Borehole. Cuttings taken from the well were washed and their lithologies logged on site.
A complete suite of cutting samples was deposited with the British Geological Survey. Selected samples of cuttings from the sedimentary sequence were taken for vitrinite reflectance determination, and samples of cuttings from the granite were taken at intervals of c. 50 m for analysis using X-ray fluorescence (XRF; fused beads for major elements; powder pellets for trace elements) at the University of Leicester, Department of Geology. Water samples were taken for analysis every 10–20 m down to 300 m, and then at 50 m intervals. Specific electrical conductance ('conductivity') and temperature of the water arising from the borehole were monitored continuously at the cuttings separator at the wellhead. Summary of findings: geology The Eastgate Borehole penetrated 271.5 m of recent and Lower Carboniferous cover rocks then nearly 723 m of basement granite. Overall, the sequence closely resembled that penetrated by the Rookhope Borehole (Dunham et al. 1965; Fig. 2). Superficial deposits of Recent and Quaternary age were encountered down to rockhead at about 4 m (all depths in this section are below drilling table, which was at 252.75 m AOD), and consisted mostly of sand, gravel and boulders. The borehole was open-holed to 10 m without casing, so that there was considerable cross-contamination of rock cuttings with caved drift material over that interval. Summary lithological log, comparing Rookhope and Eastgate boreholes. Lst, limestone; qtz, quartz; fluor, fluorite; gal, galena; py, pyrite; hm, hematite. The Scar Limestone was present from rockhead to about 12 m below drilling table, and was karstified (the shallow inclined boreholes penetrated caves that were entirely filled with damp mud). This was underlain by the 'Alternating Beds', a sequence of thinly bedded limestones, mudstones and sandstones. The Tynebottom Limestone was recognized between 39.5 and 51.5 m. As with the other horizons below the Scar Limestone, this formation was heavily mineralized (by a combination of quartz replacements and veinlets); it was also the source of a significant increase in groundwater influx to the borehole. The Jew Limestone was encountered between 76 and 83 m, again mineralized. Immediately beneath it (83–87 m) was a vein of coarse green fluorite, interpreted as a branch of the Slitt Vein. The Great Whin Sill, extensively carbonated and altered to 'white whin', was intercepted at 92 m, and continued to 158.5 m. Rocks beneath the Great Whin Sill were heavily silicified and veined by quartz down to about 175.5 m. Beneath this the sequence was essentially devoid of mineralization for almost 50 m, and the Smiddy, Upper and Lower Peghorn, and Birkdale Limestones were easily recognized. The Robinson Limestone (223.5–234.5 m) was fractured and heavily replaced by quartz and calcite. Similar mineralization continued into the Melmerby Scar Limestone (239.5–249.5 m), below which the Orton Group and Basement Beds (sandstones with occasional thin limestones) occurred between 249.5 and 272.5 m, cut by veinlets of mineralization. The granite surface at 272.5 m below table (271.5 m below surface) was marked by the occurrence of cuttings of a sticky white clay, presumably kaolinitic, over an interval of 0.5 m. The first few metres of granite were relatively soft, then became harder and more coherent. The granite resembles the cored material from the Rookhope Borehole, and is fairly uniformly coarse grained (2–6 mm), consisting of feldspars, quartz, muscovite and biotite. 
It has a greenish hue, probably the result of hydrothermal alteration of the feldspars. Mineralization was found throughout the granite, usually as quartz veinlets with sporadic occurrences of pyrite, chalcopyrite or galena. Cuttings from 415 to 615 m were uniform, with sparse indications of mineralization. Below 615 m cuttings were taken at intervals of 5 m. Fluorite mineralization was intersected between 620 and 650 m (recognized in waste cuttings, rather than in the samples collected at 5 m intervals). Minor quartz–pyrite veinlets were encountered at 655.5 m, associated with sticky white clay (presumably kaolinite). White quartz veins were encountered at 690 m and 720–721 m. Deeper, three fine-grained quartz veins with pyrite and hematite were encountered (740–742 m, 888.5 m, 912–913.5 m). Comparing the depths at which different marker horizons have been encountered in other boreholes and at outcrop, it appears that the pre-Carboniferous surface was very flat (8 m or so relief over the 10 km distance between Rookhope and Eastgate). This suggests that future boreholes can be planned with greater confidence using existing deep borehole data. In summary, typical North Pennine Orefield mineralization was found down to about 720 m, with more complex mineralization beneath that depth. These observations suggest that the borehole followed the Slitt Vein structure down to 720 m. Below this depth the Slitt Vein may have died out, or more probably deviated too far from the borehole azimuth to be recognized. In addition to the lithological description, phytoclast reflectance (individual macerals were not distinguished because of the high maturity) was determined on cuttings from the sedimentary sequence and compared with those obtained for material from the Rookhope core (Table 1). Reflectance data are similar for both boreholes, and clearly show the contact metamorphic effects of the Great Whin Sill, with a regular profile above the sill (Fig. 3; cf. Creaney 1980), perturbed by the Little Whin Sill in the Rookhope Borehole (the Eastgate Borehole started lower in the succession than this). Below the Great Whin Sill, Figure 3 shows considerable scatter in the profile of reflectance measurements. This may be due to caving from higher in the borehole prior to the installation of casing. Immediately above and below the Whin Sill, reflectance values reach 9% or more, consistent with contact temperatures in excess of 400 °C (Karweil 1955). Furthest below the Great Whin Sill, reflectance values for the lowest two samples are remarkably similar: 3.52% and 3.37% for Eastgate and 3.73% and 3.54% for Rookhope. The similarity between these values and the profiles shown in Figure 3 suggests that the heat flow from the granite to the overlying sediments was very similar for the two locations, which, although separated by 10 km, have identical intervals between the base of the Great Whin Sill and the top of the Weardale Granite (114 m). Phytoclast and vitrinite reflectance data for sediments from the Eastgate and Rookhope boreholes Phytoclast reflectance profiles for Carboniferous sediments from the Eastgate and Rookhope boreholes. Although it is difficult to recast vitrinite reflectance data into palaeotemperatures, it can be assumed that the very high reflectance values reported here (not less than 3.3%) correspond to maximum temperatures of the order of 130 °C close to the contact between the overlying sediments and the granite. 
These temperatures may have been reached prior to intrusion of the Great Whin Sill (305 Ma; Fitch & Miller 1967; recalculated by Cann & Banks 2001), which, according to the sill emplacement model of Goulty (2005), was into sediments with an approximate depth of 1.4 km (total sediment thickness on top of the granite of 1.5 km). This is consistent with the conclusion of Dunham (1987) that the Weardale area had exceptionally high geothermal gradients prior to the emplacement of the Whin Sills. Geochemistry of the Weardale Granite: implications for heat production Limited geochemical analyses of selected samples from the granite were made to evaluate its heat production potential, and to provide additional geological information and indicate the degree of uniformity of the rock. Sixteen samples of cuttings were selected at c. 50 m intervals, and analysed for major and trace elements using XRF (Table 2). Originating from a borehole drilled in conditions that were aggressive towards the drilling equipment, the cuttings were contaminated with tungsten, molybdenum and chromium (metals associated with hardened tools), as well as iron. Despite this contamination, compositional data are similar to those reported by Brown et al. (1987) for core samples of the Weardale Granite from the Rookhope Borehole. Chemical composition of the Weardale Granite, Eastgate Borehole Downhole homogeneity is readily assessed from data for Na2O, K2O and CaO (Fig. 4). For both the Rookhope and Eastgate samples, the top 200 m of the granite show considerable variation. Below this depth, these oxides are relatively constant. Similarly, the trace elements Rb and Sr, and the naturally radioactive elements Th and U show variable behaviour above, and greatest values below, 200 m depth within the granite, in both boreholes (Figs 5 and 6). Distribution with depth of CaO, Na2O and K2O within the Weardale Granite for the Eastgate Borehole (this study) and the Rookhope Borehole (Brown et al. 1987). Distribution with depth of Rb and Sr within the Weardale Granite for the Eastgate Borehole (this study) and the Rookhope Borehole (Brown et al. 1987). Variation within the top 200 m of the granite may partly be due to the presence of fluorite mineralization (consistent with observed F contents; Table 2) as well as weathering prior to the deposition of the overlying sediments. Similarly Dunham et al. (1965) showed considerable enrichment in Al2O3 (and K2O) within the upper 25 m of the granite, consistent with the development of an illitic or micaceous palaeosol. The heat production capacity of the granite can be calculated from the chemical analysis (Downing & Gray 1986), using the equation (P. C. Webb, pers. comm.) A = 0.1326ρ(0.718U + 0.193Th + 0.262K), where A is heat production in μW m−3, ρ is density in g cm−3, U is uranium content in mg kg−1, Th is thorium content in mg kg−1 and K is potassium content in element percent. For the purpose of calculation, the specific gravity of the granite was assumed to be 2.63 (Dunham et al. 1965); estimation of specific gravity for each sample by measuring the displacement of 100 ml of water by 100 g of cuttings to obtain their volume, agitating ultrasonically to remove occluded air, gave a value of 2.58. Figure 7 shows downhole variations in the calculated heat production capability of the granite at Eastgate, compared with recalculated values for the Rookhope Borehole using geochemical data from Brown et al. (1987).
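As a quick arithmetic check, the heat-production relation quoted above can be scripted as follows. The sketch is illustrative only: the function name is arbitrary and the uranium, thorium and potassium concentrations in the example are assumed round numbers, not values taken from Table 2.

```python
def heat_production_uW_per_m3(U_ppm, Th_ppm, K_pct, density_g_cm3=2.63):
    """Radiogenic heat production A (in microW/m^3) from the relation quoted in the text:
    A = 0.1326 * rho * (0.718*U + 0.193*Th + 0.262*K)."""
    return 0.1326 * density_g_cm3 * (0.718 * U_ppm + 0.193 * Th_ppm + 0.262 * K_pct)

# Assumed (illustrative) concentrations: U = 9 ppm, Th = 22 ppm, K = 3.9 % by element
print(round(heat_production_uW_per_m3(9.0, 22.0, 3.9), 2))   # ~4.1, the same order as the Eastgate mean
```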
For the Eastgate samples in general, heat production values rise from 3 μW m−3 to an average value of 4.1 μW m−3 below 200 m depth from the granite surface. Observed reductions in heat production values at depths >400 m within the granite may be due to quartz veining, and have been excluded in the calculation of the average heat production value. Distribution with depth of Th, U and K within the Weardale Granite for the Eastgate Borehole (this study) and the Rookhope Borehole (Brown et al. 1987). Variation with depth of calculated heat production for the Weardale Granite from the Eastgate and Rookhope boreholes. Overall, the geochemical data from the granite yield the following results (Table 3). Summary of thermal properties for granite from the Eastgate Borehole (1) The heat production value for unaltered granite is estimated to be 4.1μW m−3, excluding samples at borehole depths of 400 m and shallower, and excluding the 740 and 950 m samples, which contain quartz vein material. This value exceeds that reported by Downing & Gray (1986) for the Weardale Granite (3.7 μW m−3). (2) Heat production appears to increase with depth, overall, with local perturbations related to the occurrence of veins of quartz and other discontinuities that are unlikely to affect the bulk heat production capacity. No attempt has been made to measure the thermal conductivity of the granite from the Eastgate Borehole. Heat flow has been estimated on a preliminary basis with the following assumptions. For the entire vertical interval, a mean value for thermal conductivity of 2.99 W m−1 K−1 has been estimated using values from Downing & Gray (1986) for different rock types (Weardale granite from the Rookhope core and Carboniferous lithologies), weighted according to thickness. Heat production within the granite has been neglected, and a ground surface temperature of 8 °C has been used. With these assumptions, the heat flow at Eastgate is estimated to be 115 mW m−2, well in excess of values reported by Downing and Gray (1986) for Rookhope. Water strikes and hydrogeological conditions Significant water strikes were encountered during drilling, with unusually high rates of groundwater ingress from the Carboniferous sequence (requiring a tri-cone roller bit instead of using hammer drilling). At times, the water yield from the Carboniferous strata of this one borehole exceeded the entire former dewatering rates of a number of local mines. These high water yields reflect the high permeability of fractures associated with the Slitt Vein structure. Installation of casing isolated shallow-sourced groundwaters from the borehole. The first casing sealed the hole off from water associated with limestones above the Whin Sill, on the assumption that the Sill itself is rarely a prolific aquifer. However, two major water feeders were encountered within the Whin Sill, bringing the borehole water yield back to levels found in the overlying sedimentary strata (c. 60 m3 h−1). Once the borehole had penetrated 130 m into the granite, the installation of the second casing was intended to eliminate all shallow feeders. Given the generally low permeability of granite, subsequent feeders would be expected to be minimal unless there was unusually intense fracturing. As expected, after the second casing had been grouted into the borehole, water yield dropped to zero, and this continued for a further 7 m. However, at around 410 m below ground, the drill stem pressure gauge jumped to 23 bar, and the drill bit suddenly dropped by 0.5 m. 
At this point the pressure gauge went off-scale (>30 bar), and water surged into the hole, rapidly rising to within 10 m of the ground surface. It is clear that a major open fissure had been encountered at this point. The electrical conductivity of this water greatly exceeded that of the waters previously encountered in the Carboniferous, and it was also warm to the touch (around 26 °C). Air-lifting at rates of up to 60 m3 h−1 (maximum capacity of the equipment) failed to lower the water level by more than 1 m, indicating a transmissivity in excess of 2000 darcy m. This is believed to be an unequalled value for granites worldwide (E. Sudicky, pers. comm.), although it clearly reflects the influence of the Slitt Vein, rather than the permeability of more typical extensional joints and faults typical of most plutons. Other fractures were intersected at depths of 436, 464, 492–496, 654, 720–721, 739.5 and 813–814.5 m. These fracture intersections were not accompanied by dramatic events such as occurred at 410 m, and the quantities of water that they introduced to the borehole were difficult to discern given that the 410 m feeder had already exceeded the air-lift capacity of the rig. A gradual increase in the temperature of the water arriving at the well head (>27 °C towards the end of drilling) indicated that a significant amount of warmer deeper water was mixing with the 410 m feeder water. After the end of drilling geophysical logs for fluid temperature, conductivity and flow rate (by impeller) were run twice: first through the static water column, and then with a 100 mm electric submersible pump stimulating the borehole water column by pumping from just below the water surface at a rate of around 1.4 m3 h−1. Comparison of the two suites of logs indicates significant water feeders associated with fractures at c. 730 and 756 m depth. (Other feeders were also indicated by the geophysical logs at 420, 434, 447, 485, 497, 527, 540, 557, 670 and 686 m.) Although none of these were as prolific as the 410 m feeder, they demonstrate the occurrence of permeable fractures at depths approaching those that could be considered for a long-term production borehole. Groundwater quality The electrical conductivity and temperature of water from the borehole were measured continuously at the cuttings separator. Water samples were taken initially every 10–20 m, and every 50 m for depths >300 m. Cations were determined using inductively coupled plasma atomic emission spectroscopy, alkalinity by titration, ammonium by Kjeldahl digestion and anions by ion chromatography. Water samples up to and including 86.5 m were acidified before filtration; this meant that dissolved suspended solids were reported in the chemical analysis, giving misleading results that overestimated dissolved cations (high positive charge balances). From 135 to 995 m samples were filtered prior to acidification, with charge balances predominantly less than ±5%. Analyses are given in full in Table 4. Water compositional data (in mg l−1) for samples taken from the Eastgate Borehole (135 m and deeper) With depth, conductivity and temperature rise until c. 400 m (Fig. 8), at which point there is a sharp increase in both. This corresponds to the point at which the hole intersected the large open fracture at 410 m. Individual chemical species show contrasting behaviour with depth.
The major cations (Na, K, Mg, Ca), lithium, strontium and chloride increase in the same way as conductivity, with a sudden increase once the 410 m feeder had been intercepted (Fig. 9). Observed pH decreases, from 7.6–8.1 to 5.8–6.0, probably reflecting the absence of bicarbonate buffering at depth: alkalinity (expressed as mg l⁻¹ CaCO3) decreases with depth to very low values, as does dissolved sulphate.
[Figure 8: Variation of (a) water temperature and (b) electrical conductivity with depth in samples taken for analysis.]
[Figure 9: Variations with depth in (a) pH, (b) alkalinity (mg l⁻¹ CaCO3) and sulphate, (c) calcium, sodium and chloride and (d) lithium and strontium.]
The constancy of the chemical composition of the water below 400 m reflects a combination of two possible causes: the water column within the hole is well mixed as a consequence of the drilling operation and/or the contribution of water from the fissure at 410 m is sufficiently strong to dominate water chemistry during drilling. There is no evidence from the water chemistry of any substantial flow of water that is more saline (i.e. deeper sourced) or less saline (i.e. derived from nearer the surface) into the hole at depths greater than 410 m. The deep brine encountered by the borehole has as dominant solutes 27 700 mg l⁻¹ chloride, 10 030 mg l⁻¹ sodium and 5320 mg l⁻¹ calcium. Comparing its composition with water recovered from Cambokeels mine (Manning & Strutt 1990; Table 5), there is little doubt that the Eastgate Borehole has intersected the same water system (over 15 years after the brine was sampled in the mine) where it is deeper, more saline and warmer.
[Table 5: Water compositions (in mg l⁻¹) for samples taken from the Eastgate Borehole (mean of 10 samples below 400 m), compared with analyses reported for water from Cambokeels Mine (Manning & Strutt 1990), from the Southampton geothermal project (Downing & Gray 1986), and for seawater.]
Importantly, the Eastgate Borehole water is about 25% more saline than seawater, but less than half (36%) the salinity of the water currently produced at the Southampton geothermal plant. It is more than twice the salinity of thermal waters reported from South Crofty Mine (Edmunds et al. 1984). As in South Crofty, molar Cl/Br ratios for the Eastgate water are below that typical of seawater (650), which could be attributed to loss of Cl via precipitation of halite through evaporation; in contrast, the high value for Southampton suggests halite dissolution. Molar Na/Li ratios are very low for the Eastgate and South Crofty waters, indicating substantial interaction with basement rocks, and their molar Na/Br ratios are again consistent with water–rock interaction (whereas the Southampton water's high Na/Li and Na/Br ratios are consistent with halite dissolution). The Eastgate water is close in composition to that identified by Cann & Banks (2001) as being associated with fluorite zone mineralization. Although unlikely to be a mineralizing fluid in its own right (although silica minerals are likely to have precipitated), the Eastgate water seems to have shared a common fluid–rock interaction history with the waters responsible for fluorite mineralization. Psyrillos et al. (2003) discussed the role of water from adjacent basins as part of the kaolinization process in the SW England granites. As the South Crofty and Eastgate waters are so similar in their major solute proportions, it is likely that analogous processes led to their formation, and the Eastgate waters may be derived ultimately from adjacent deep aquifers.
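The molar ratios used in this comparison are straightforward to derive from concentrations reported in mg l⁻¹. The short sketch below shows the conversion; the chloride and sodium values are those quoted above for the Eastgate brine, while the bromide and lithium concentrations are placeholder values only (the full analyses are in Table 5, which is not reproduced here), so the printed numbers are illustrative rather than the published ratios.

```python
# Sketch: converting mg/l concentrations into the molar ratios (Cl/Br, Na/Li, Na/Br)
# discussed in the text. Cl and Na are the values quoted for the Eastgate brine;
# Br and Li below are hypothetical placeholders.

MOLAR_MASS = {"Cl": 35.45, "Br": 79.90, "Na": 22.99, "Li": 6.94}  # g/mol

def molar_ratio(conc_a_mg_l, element_a, conc_b_mg_l, element_b):
    """Molar ratio a:b from concentrations given in mg/l."""
    mol_a = conc_a_mg_l / MOLAR_MASS[element_a]
    mol_b = conc_b_mg_l / MOLAR_MASS[element_b]
    return mol_a / mol_b

eastgate = {"Cl": 27700.0, "Na": 10030.0, "Br": 120.0, "Li": 40.0}  # Br, Li: placeholders

print("Cl/Br:", round(molar_ratio(eastgate["Cl"], "Cl", eastgate["Br"], "Br")))
print("Na/Li:", round(molar_ratio(eastgate["Na"], "Na", eastgate["Li"], "Li")))
print("Na/Br:", round(molar_ratio(eastgate["Na"], "Na", eastgate["Br"], "Br")))
# A molar Cl/Br ratio below the seawater value of ~650 is the signature referred to above.
```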
Using the chemical data given in Table 5, temperatures at which the water may have equilibrated can be estimated (Truesdell 1984; Table 6). The quartz geothermometer gives a temperature of 38 °C (less than the observed bottom-hole temperature of 46 °C), whereas the alkali geothermometers give temperatures between 146 and 191 °C, similar to those calculated by Manning & Strutt (1990). It is likely that the water has lost silica by precipitation of quartz close to the surface and possibly in the recent geological past (quartz precipitation is abundant within fractures in the Slitt Vein). The alkali geothermometers suggest that the water achieved equilibrium with respect to Na, K and Ca at depths of 3–4 km (assuming a geothermal gradient of 40 °C km⁻¹). This suggests that the water in the borehole forms part of a deep circulation system, which appears still to be active given the similarity of the water to that encountered at Cambokeels between 15 and 20 years ago.
[Table 6: Temperatures derived from borehole water compositions using chemical geothermometers (Truesdell 1984).]
Geothermal potential
Geophysical logging of the settled water column in the borehole, 3 days after the end of drilling, indicated the magnitude of the geothermal resource potentially developable in this vicinity. The bottom-hole temperature at 995 m was 46.2 °C, which yields a mean geothermal gradient estimate for this borehole of 38 °C km⁻¹. This compares very favourably with the UK average (c. 21 °C km⁻¹; Downing & Gray 1986), from which a bottom-hole temperature of only 30–35 °C would be predicted at this depth. As the geothermal gradient is likely to continue on the same linear trend as logged from 411 m to 995 m, the implication of the measured bottom-hole temperature at 995 m is that a borehole sunk to a typical 'production' depth of about 1800 m would be expected to return a bottom-hole temperature in the range 75–80 °C. Given that the heat production capacity of the Weardale Granite at Eastgate is similar to that previously calculated by Downing & Gray (1986), the key issues relate to the availability of natural groundwater to act as a transmission fluid for heat produced in the granite at various distances from the borehole. The extraordinary transmissivity of the major fracture at around 410 m depth provides unequivocal evidence of the association of highly permeable fractures with the Slitt Vein structure. Although the deeper fractures were not quite so permeable, geophysical logging indicates that they are still significant water-bearing structures. The occurrence of permeable fractures associated with the Slitt Vein at depth means that there is no need to artificially introduce water to the geological environment, in contrast to the usual assumption that similar UK granites are (at best) hot dry rock (HDR) prospects (Downing & Gray 1986; Barker et al. 2000). Instead of an HDR prospect, therefore, the Eastgate resource may be classified as a 'low-enthalpy hydrogeothermal resource hosted in vein-bearing granites'. This is the first time that this category of geothermal resource has been described (see Downing & Gray 1986; R. A. Downing, pers. comm.). Similar veins occur within the nearby (also radiothermal) granites of the Lake District and Wensleydale. Elsewhere, the Lecht Mine has a similar structure in the radiothermal granite of the Eastern Highlands, and the SW England Batholith hosts many analogous lodes, some known to have transmitted brines similar to that found at Eastgate (Edmunds et al. 1984).
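The thermal figures quoted in this section follow from simple conductive arithmetic, and a reader can reproduce them in a few lines. The sketch below is a back-of-envelope check only, not the authors' modelling; it uses the values stated in the text (mean conductivity 2.99 W m⁻¹ K⁻¹, gradient 38 °C km⁻¹, surface temperature 8 °C, assumed gradient of 40 °C km⁻¹ for the geothermometer depths), and any small difference from the published 115 mW m⁻² reflects rounding.

```python
# Back-of-envelope check of the thermal figures quoted above. Input values are taken
# from the text; the calculation is a simple linear (conductive) extrapolation.

k = 2.99         # W m^-1 K^-1, thickness-weighted mean thermal conductivity (from text)
grad = 38.0e-3   # K m^-1, mean geothermal gradient logged in the borehole (38 degC/km)
T_surface = 8.0  # degC, assumed ground surface temperature (from text)

# Conductive heat flow q = k * dT/dz
q = k * grad
print(f"heat flow ~ {q * 1000:.0f} mW m^-2")  # ~114 mW m^-2, close to the quoted 115

# Linear extrapolation of temperature to a 'production' depth of 1800 m
depth = 1800.0
print(f"predicted bottom-hole temperature at {depth:.0f} m ~ {T_surface + grad * depth:.0f} degC")  # ~76 degC

# Depths at which the alkali-geothermometer temperatures (146-191 degC) would be reached,
# using the 40 degC/km gradient assumed in the text (of the same order as the 3-4 km quoted)
for T_eq in (146.0, 191.0):
    z_km = (T_eq - T_surface) / 40.0
    print(f"T = {T_eq:.0f} degC reached at ~ {z_km:.1f} km")
```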
It may be that further exploration in geological settings similar to that at Eastgate would lead to the identification of other geothermal resources of significant magnitude. Table 7 compares aspects of the Eastgate Exploration Borehole with the existing Southampton Borehole. Given that the amount of water that may be produced from Eastgate is very similar to that yielded by the Southampton Borehole, the latter is an appropriate model for possible future development at Eastgate. The Eastgate water has the advantage that it is much less saline than the Southampton water (44 500 mg l⁻¹ total dissolved solids (TDS) compared with 124 440 mg l⁻¹ TDS at Southampton).
[Table 7: Comparison of aspects of the Eastgate Exploration Borehole with the existing production facility at Southampton.]
Given the findings from the Eastgate exploration borehole, it is reasonable to predict that a further borehole sunk to 1800 m could provide a resource similar in magnitude to that at Southampton. However, because the Southampton borehole was drilled to intersect a more or less horizontal aquifer unit, a production borehole at Eastgate would need to intersect fractures associated with the Slitt Vein at the target depth. From the experience of drilling this exploration borehole, it is unlikely that a depth of 1800 m could be reached without intersecting a number of shallower fractures carrying cooler water. Further casing close to the total depth would therefore be needed, so that shallower feeders could be sealed off before drilling on in pursuit of sufficiently large fractures at depth to yield a viable low-enthalpy resource. Alternatively, a new production borehole could be deliberately drilled off-centre from the target vein structure, with directional drilling techniques being used to deviate the lowermost parts of the borehole until they contact the permeable fractures along the vein azimuth. Implementation of either of these options would be both the most significant engineering challenge and the most significant risk element in proceeding to full-scale geothermal energy production at Eastgate.
Acknowledgements
We acknowledge funding by the Weardale Task Force, made up of One NorthEast, Durham County Council, Wear Valley District Council and Lafarge Cement. We are grateful to Rob Westaway for his help in reviewing this paper. This project received funding from One NorthEast through the County Durham Sub Regional Partnership. The project was part-financed by the European Union, European Regional Development Fund.
© 2007 The Geological Society of London
References
Bachler, D. & Kohl, T. 2005. Coupled thermal–hydraulic–chemical modelling of enhanced geothermal systems. Geophysical Journal International, 161, 533–548.
Banks, D., Skarphagen, H., Wiltshire, R. & Jessop, C. 2004. Heat pumps as a tool for energy recovery from mining wastes. In: Gieré, R. & Stille, P. (eds) Energy, Waste, and the Environment: a Geochemical Perspective. Geological Society, London, Special Publications, 236, 499–513.
Barker, J.A., Downing, R.A., Gray, D.A., Findlay, J., Kellaway, G.A., Parker, R.H. & Rollin, K.E. 2000. Hydrogeothermal studies in the United Kingdom. Quarterly Journal of Engineering Geology and Hydrogeology, 33, 41–58.
Bott, M.H.P. & Masson-Smith, D. 1953. Gravity measurements over the northern Pennines. Geological Magazine, 90, 127–130.
Bott, M.H.P. & Masson-Smith, D. 1957. The geological interpretation of a gravity survey of the Alston Block and the Durham coalfield. Quarterly Journal of the Geological Society of London, 113, 93–117.
Brown, G.C., Ixer, R.A., Plant, J.A. & Webb, P.C. 1987. Geochemistry of granites beneath the north Pennines and their role in orefield mineralization. Transactions of the Institution of Mining and Metallurgy, 96, B65–B76.
Caneta Research 1999. Global warming impacts of ground source heat pumps compared to other heating/cooling systems. Report to the Renewable and Electrical Energy Division, Natural Resources Canada (Ottawa). Caneta Research, Mississauga.
Cann, J.R. & Banks, D.A. 2001. Constraints on the genesis of the mineralization of the Alston Block, Northern Pennine Orefield, northern England. Proceedings of the Yorkshire Geological Society, 53, 187–196.
Creaney, S.D. 1980. Petrographic texture and vitrinite reflectance variation on the Alston Block, north-east England. Proceedings of the Yorkshire Geological Society, 42, 553–580.
Dickson, M.H. & Fanelli, M. 2005. Geothermal Energy: Utilization and Technology. Earthscan, London.
DiPippo, R. 2005. Geothermal Power Plants. Principles, Applications and Case Studies. Elsevier, Oxford.
Downing, R.A. & Gray, D.A. (eds) 1986. Geothermal Energy—the Potential in the United Kingdom. HMSO, London.
Dunham, K.C. 1987. Discussion of G.C. Brown et al. 'Geochemistry of granites beneath the north Pennines and their role in orefield mineralization' (Transactions of the Institution of Mining and Metallurgy, 96, B65–B76). Transactions of the Institution of Mining and Metallurgy, 96, B229–B230.
Dunham, K.C. 1990. Geology of the Northern Pennine Orefield. Volume 1—Tyne to Stainmore. Economic Memoir of the British Geological Survey, HMSO, London.
Dunham, K.C., Dunham, A.C., Hodge, B.L. & Johnson, G.A.L. 1965. Granite beneath Viséan sediments with mineralization at Rookhope, northern Pennines. Quarterly Journal of the Geological Society of London, 121, 383–417.
Edmunds, W.M., Andrews, J.N., Burgess, W.G., Kay, R.L.F. & Lee, D.J. 1984. The evolution of saline and thermal groundwaters in the Carnmenellis granite. Mineralogical Magazine, 48, 407–424.
Fitch, F.J. & Miller, J.A. 1967. The age of the Whin Sill. Geological Journal, 5, 233–250.
Fraser, A.J. & Gawthorpe, R.L. 2003. An Atlas of Carboniferous Basin Evolution in Northern England. Geological Society, London, Memoirs, 28.
Goulty, N.R. 2005. Emplacement mechanism of the Great Whin and Midland Valley dolerite sills. Journal of the Geological Society, London, 162, 1047–1056.
Karweil, J. 1955. Die Metamorphose der Kohlen vom Standpunkt der physikalischen Chemie. Zeitschrift der Deutschen Geologischen Gesellschaft, 107, 132–139.
Kolditz, O. & Clauser, C. 1998. Numerical simulation of flow and heat transfer in fractured crystalline rocks: application to the Hot Dry Rock site in Rosemanowes (UK). Geothermics, 27, 1–23.
Lund, J.W., Lienau, P.J. & Lunis, B.C. (eds) 1998. Geothermal direct-use engineering and design handbook. Geo-Heat Center, Oregon Institute of Technology, Klamath Falls, OR.
Manning, D.A.C. & Strutt, D.W. 1990. Metallogenetic significance of a North Pennine springwater. Mineralogical Magazine, 54, 629–636.
Psyrillos, A., Burley, S.D., Manning, D.A.C. & Fallick, A.E. 2003. Coupled mineral–fluid evolution of a basin and high: kaolinization in the SW England granites in relation to the development of the Plymouth Basin. In: Petford, N. & McCaffrey, K.J.W. (eds) Hydrocarbons in Crystalline Rocks. Geological Society, London, Special Publications, 214, 175–195.
Richards, H.G., Parker, R.H., Green, A.S.P., et al. 1994.
The performance and characteristics of the experimental hot dry rock geothermal reservoir at Rosemanowes, Cornwall (1985–1988). Geothermics, 23, 73–109.
Sanner, B., Karytsas, C., Mendrinos, D. & Rybach, L. 2003. Current status of ground source heat pumps and underground thermal energy storage in Europe. Geothermics, 32, 579–588.
Truesdell, A.H. 1984. Chemical geothermometers for geothermal exploration. In: Henley, R.W., Truesdell, A.H. & Barton, P.B. (eds) Fluid–Mineral Equilibria in Hydrothermal Systems. Society of Economic Geologists, Reviews in Economic Geology, 1, 31–44.
Webb, P.C., Lee, M.K. & Brown, G.C. 1987. Heat flow–heat production relationships in the UK and the vertical distribution of heat production in granite batholiths. Geophysical Research Letters, 14, 279–282.
Webb, P.C., Tindle, A.G., Barritt, S.D., Brown, G.C. & Miller, J.F. 1985. Radiothermal granites of the United Kingdom: comparison of fractionation patterns and variation of heat produced for selected granites. High Heat Production (HHP) Granites, Hydrothermal Circulation and Ore Genesis. Institution of Mining and Metallurgy, 203–213.
Younger, P.L. 2000. Nature and practical implications of heterogeneities in the geochemistry of zinc-rich, alkaline mine waters in an underground F–Pb mine in the UK. Applied Geochemistry, 15, 1383–1397.
Younger, P.L. & Adams, R. 1999. Predicting mine water rebound. Environment Agency R&D Technical Report, W179.
[Image: A laboratory tabletop centrifuge. The rotating unit, called the rotor, has fixed holes drilled at an angle (to the vertical), visible inside the smooth silver rim. Test tubes are placed in these slots and the motor is spun. As the centrifugal force is in the horizontal plane and the tubes are fixed at an angle, the particles have to travel only a little distance before they hit the wall and drop down to the bottom. These angle rotors are very popular in the lab for routine use.]

A centrifuge is a piece of equipment, generally driven by an electric motor (or, in some older models, by hand), that puts an object in rotation around a fixed axis (spins it in a circle), applying a potentially strong force perpendicular to the axis (outward). Large centrifuges can be used to simulate high gravity or acceleration environments (for example, high-G training for test pilots). Medium-sized centrifuges are used in washing machines and at some swimming pools to wring water out of fabrics. Many centrifuges are used as laboratory or industrial equipment to separate materials, for example small molecules from large molecules. The centrifuge works using the sedimentation principle, where the centripetal acceleration causes denser substances to separate out along the radial direction (the bottom of the tube). By the same token, objects that are less dense will tend to move to the top (of the tube; in the rotating picture, move to the centre). Most materials-separating centrifuges use liquids or a mixture of solids and liquids, but gas centrifuges are used for isotope separation, such as to enrich nuclear fuel.

History and predecessors

[Image: Early 20th-century advertising poster for a milk separator.]

English military engineer Benjamin Robins (1707–1751) invented a whirling arm apparatus to determine drag. In 1864, Antonin Prandtl invented the first dairy centrifuge in order to separate cream from milk. Gustaf de Laval later demonstrated the first continuous centrifugal separator, making its commercial application feasible.

There are multiple types of centrifuge, which can be classified by intended use or by rotor design.

Types by rotor design:[1][2][3][4]
- Fixed-angle centrifuges are designed to hold the sample containers at a constant angle relative to the central axis.
- Swinging head (or swinging bucket) centrifuges, in contrast to fixed-angle centrifuges, have a hinge where the sample containers are attached to the central rotor. This allows all of the samples to swing outwards as the centrifuge is spun.
- Continuous tubular centrifuges do not have individual sample vessels and are used for high volume applications.

Types by intended use:
- Ultracentrifuges are optimized for spinning a rotor at very high speeds and are popular in the fields of molecular biology, biochemistry and polymer science. This type may include preparative or analytical, fixed-angle or swing head varieties.[3]
- Haematocrit centrifuges are used to measure the percentage of red blood cells in whole blood.
- Gas centrifuges, including Zippe-type centrifuges.

Industrial centrifuges may otherwise be classified according to the type of separation of the high density fraction from the low density one:
- Screen centrifuges, where the centrifugal acceleration allows the liquid to pass through a screen of some sort, through which the solids cannot go (due to granulometry larger than the screen gap or due to agglomeration). Common types are:
  - Screen/scroll centrifuges
  - Pusher centrifuges
  - Peeler centrifuges
- Decanter centrifuges, in which there is no physical separation between the solid and liquid phase, rather an accelerated settling due to centrifugal acceleration. Continuous liquid; common types are:
  - Solid bowl centrifuges
  - Conical plate centrifuges

Isolating suspensions

Simple centrifuges are used in chemistry, biology, and biochemistry for isolating and separating suspensions. They vary widely in speed and capacity. They usually comprise a rotor containing two, four, six, or many more numbered wells within which the samples, contained in centrifuge tubes, may be placed.

Isotope separation

Other centrifuges, the first being the Zippe-type centrifuge, separate isotopes, and these kinds of centrifuges are in use in nuclear power and nuclear weapon programs. Gas centrifuges are used in uranium enrichment. The heavier isotope of uranium (uranium-238) in the uranium hexafluoride gas tends to concentrate at the walls of the centrifuge as it spins, while the desired uranium-235 isotope is extracted and concentrated with a scoop selectively placed inside the centrifuge.[citation needed] It takes many thousands of centrifugations to enrich uranium enough for use in a nuclear reactor (around 3.5% enrichment),[citation needed] and many thousands more to enrich it to weapons-grade (above 90% enrichment) for use in nuclear weapons.[citation needed]

Aeronautics and astronautics

[Image: The 20 G centrifuge at the NASA Ames Research Center.]

Human centrifuges are exceptionally large centrifuges that test the reactions and tolerance of pilots and astronauts to acceleration above those experienced in the Earth's gravity. The US Air Force at Holloman Air Force Base, New Mexico operates a human centrifuge. The centrifuge at Holloman AFB is operated by the aerospace physiology department for the purpose of training and evaluating prospective fighter pilots for high-g flight in Air Force fighter aircraft.[5] The use of large centrifuges to simulate a feeling of gravity has been proposed for future long-duration space missions. Exposure to this simulated gravity would prevent or reduce the bone decalcification and muscle atrophy that affect individuals exposed to long periods of freefall.[5][6] The first centrifuges used for human research were used by Erasmus Darwin, the grandfather of Charles Darwin. The first large-scale human centrifuge designed for aeronautical training was created in Germany in 1933.[7]

Geotechnical centrifuge modeling

Geotechnical centrifuge modeling is used for physical testing of models involving soils.
Centrifuge acceleration is applied to scale models to scale the gravitational acceleration and enable prototype scale stresses to be obtained in scale models. Problems such as building and bridge foundations, earth dams, tunnels, and slope stability, including effects such as blast loading and earthquake shaking, can be studied in this way.[8]

Commercial applications
- Sugar centrifugal machines are used to separate sugar crystals from the crystallized syrup, or mother liquor. Centrifuges with a batch weight of up to 2,200 kg per charge are used in the sugar industry to separate the sugar crystals from the mother liquor.[9]
- Standalone centrifuges for drying (hand-washed) clothes – usually with a water outlet.
- Centrifuges are used in the attraction Mission: SPACE, located at Epcot in Walt Disney World, which propels riders using a combination of a centrifuge and a motion simulator to simulate the feeling of going into space.
- In soil mechanics, centrifuges utilize centrifugal acceleration to match soil stresses in a scale model to those found in reality.
- Large industrial centrifuges are commonly used in water and wastewater treatment to dry sludges. The resulting dry product is often termed cake, and the water leaving a centrifuge after most of the solids have been removed is called centrate.
- Large industrial centrifuges are also used in the oil industry to remove solids from the drilling fluid.
- Disc-stack centrifuges are used by some companies in the oil sands industry to separate small amounts of water and solids from bitumen.
- Centrifuges are used to separate cream (remove fat) from milk; see Separator (milk).

Mathematical description

[Image: A 19th-century hand-cranked laboratory centrifuge.]

Protocols for centrifugation typically specify the amount of acceleration to be applied to the sample, rather than specifying a rotational speed such as revolutions per minute. This distinction is important because two rotors with different diameters running at the same rotational speed will subject samples to different accelerations. During circular motion the acceleration is the product of the radius and the square of the angular velocity ω, and the acceleration relative to "g" is traditionally named "relative centrifugal force" (RCF). The acceleration is measured in multiples of "g" (or × "g"), the standard acceleration due to gravity at the Earth's surface, a dimensionless quantity given by the expression:

RCF = rω² / g

where g is the Earth's gravitational acceleration, r is the rotational radius, and ω is the angular velocity in radians per unit time.

This relationship may also be written as

RCF = 1.11824396 × 10⁻⁶ × r_mm × N_RPM²

where r_mm is the rotational radius measured in millimeters (mm), and N_RPM is the rotational speed measured in revolutions per minute (RPM).

To avoid having to perform a mathematical calculation every time, one can find nomograms for converting RCF to rpm for a rotor of a given radius. A ruler or other straight edge lined up with the radius on one scale, and the desired RCF on another scale, will point at the correct rpm on the third scale.[10] Based on automatic rotor recognition, modern centrifuges have a button for automatic conversion from RCF to rpm and vice versa.
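The two expressions above are easy to check numerically: the quoted constant 1.11824396 × 10⁻⁶ is simply (2π/60)² divided by g and by 1000 (to convert millimetres to metres). The short sketch below is illustrative only; the example radius and speed are arbitrary and are not taken from any particular instrument.

```python
import math

# Sketch of the RCF <-> rpm relationships given above.
G = 9.80665  # standard gravity, m s^-2

def rcf_from_rpm(radius_mm, rpm):
    """Relative centrifugal force (multiples of g) from radius (mm) and speed (rpm)."""
    omega = 2.0 * math.pi * rpm / 60.0   # angular velocity, rad/s
    radius_m = radius_mm / 1000.0
    return radius_m * omega**2 / G       # RCF = r * omega^2 / g

def rpm_from_rcf(radius_mm, rcf):
    """Invert the relationship: rotational speed (rpm) needed for a target RCF."""
    return math.sqrt(rcf / (1.11824396e-6 * radius_mm))

# The quoted constant is (2*pi/60)^2 / g / 1000:
print((2.0 * math.pi / 60.0) ** 2 / G / 1000.0)  # ~1.118e-06

print(rcf_from_rpm(radius_mm=85.0, rpm=3000.0))  # ~855 g for an 85 mm radius at 3000 rpm
print(rpm_from_rcf(radius_mm=85.0, rcf=855.0))   # ~3000 rpm back again
```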
References and notes
↑ "Plasmid DNA Separation: Fixed-Angle and Vertical Rotors in the Thermo Scientific Sorvall Discovery™ M120 & M150 Microultracentrifuges" (Thermo Fisher publication)
↑ http://uqu.edu.sa/files2/tiny_mce/plugins/filemanager/files/4250119/lectures/1._instr.pdf
↑ http://www.dtic.mil/dtic/tr/fulltext/u2/a236267.pdf
↑ Article on centrifugal controls, retrieved on June 5, 2010
↑ Nomogram example
↑ Naesgaard et al., Modeling flow liquefaction, its mitigation, and comparison with centrifuge tests

See also
- Lamm equation
- Sedimentation coefficient
- Clearing factor
- Hydroextractor
- Honey extractor
- Separation process (includes list of techniques)

External links
- RCF Calculator and Nomograph
- Centrifugation Rotor Calculator
- Selection of historical centrifuges in the Virtual Laboratory of the Max Planck Institute for the History of Science
Operator algebras: subfactors and their applications Structure of operator algebras: subfactors and fusion categories Timetable (OASW01) Monday 23rd January 2017 to Friday 27th January 2017 09:00 to 09:50 Registration 09:50 to 10:00 Welcome from David Abrahams (INI Director) 10:00 to 11:00 Sorin Popa (University of California, Los Angeles) On rigidity in II1 factor framework II1 factors appear naturally from a multitude of data (groups, group actions, operations such as free products, etc). This leads to two types of rigidity phenomena in this framework: 1. W*-rigidity, aiming at recovering the building data from the isomorphism class of the algebra. 2. Restrictions on the symmetries of the II1 factor (like the index of its subfactors). We will discuss some old and new results in this direction, and the role of deformation-rigidity techniques in obtaining them. INI 1 11:00 to 11:30 Morning Coffee 11:30 to 12:30 Stephen Bigelow (University of California, Santa Barbara) A diagrammatic approach to Ocneanu cells Kuperberg's SU(3) spider has "web" diagrams with oriented strands and trivalent vertices. A closed web evaluates to a real number, which can be thought of as a weighted sum of certain ways to "colour" the faces of the web. The weighting here is defined using Ocneanu cells, which were explicitly calculated in a 2009 paper by Evans and Pugh. I will describe a diagrammatic way to recover their calculation in the simplest case of the A series. Each strand of a web becomes a parallel pair of coloured strands, and each vertex becomes three coloured strands that connect up the three incoming pairs of coloured strands. 12:30 to 13:30 Lunch @ Wolfson Court 13:30 to 14:30 Corey Jones (Australian National University) Operator Algebras in rigid C*-tensor categories In this talk, we will describe a theory of operator algebra objects in an arbitrary rigid C*-tensor category C. Letting C be the category of finite dimensional Hilbert spaces, we recover the ordinary theory of operator algebras. We will explain the philosophy and motivation for this framework, and how it provides a unified perspective on various aspects of the theories of rigid C*-tensor categories, quantum groups, and subfactors. This is based on joint work with Dave Penneys. 14:30 to 15:30 David Jordan (University of Edinburgh) Dualizability and orientability of tensor categories A topological field theory is an invariant of oriented manifolds, valued in some category C, with many pleasant properties. According to the cobordism hypothesis, a fully extended -- a.k.a. fully local -- TFT is uniquely determined by a single object of C, which we may think of as the invariant assigned by the theory to the point. This object must have strong finiteness properties, called dualizability, and strong symmetry properties, called orientability. In this talk I'd like to give an expository discussion of several recent works "in dimension 1,2, and 3" -- of Schommer-Pries, Douglas--Schommer-Pries--Snyder, Brandenburg-Chivrasitu-Johnson-Freyd, Calaque-Scheimbauer -- which unwind the abstract notions of dualizability and orientability into notions very familiar to the assembled audience: things like Frobenius algebras, fusion categories, pivotal fusion categories, modular tensor categories. 
Finally in this context, I'll discuss some work in progress with Adrien Brochier and Noah Snyder, which finds a home on these shelves for arbitrary tensor and pivotal tensor categories (no longer finite, or semi-simple), and for braided and ribbon braided tensor categories. 15:30 to 16:00 Afternoon Tea 16:00 to 17:00 Yusuke Isono (Kyoto University) On fundamental groups of tensor product II_1 factors We study a stronger notion of primeness for II_1 factors, which was introduced in my previous work. Using this, we prove that if G and H are groups which are realized as fundamental groups of II_1 factors, then so are groups GH and G \cap H. 17:00 to 18:00 Welcome Wine Reception at INI 10:00 to 11:00 George Elliott (University of Toronto); (Cardiff University); (University of Copenhagen) The classification of unital simple separable C*-algebras with finite nuclear dimension As, perhaps, a climax to forty years of work by many people, the class of algebras in the title (assumed also to satisfy the UCT, which holds in all concrete examples and may be automatic) can now be classified by means of elementary invariants (the K-groups and tracial simplex). 11:30 to 12:30 Stuart White (University of Glasgow) The structure of simple nuclear C*-algebras: a von Neumann prospective I'll discuss aspects of structure of simple nuclear C*-algebras ( in particular the Toms-Winter regularity conjecture) drawing parallels with results for injective von Neumann algebras. 13:30 to 14:30 Wilhelm Winter (Universität Münster) Structure and classification of nuclear C*-algebras: The role of the UCT The question whether all separable nuclear C*-algebras satisfy the Universal Coefficient Theorem remains one of the most important open problems in the structure and classification theory of such algebras. It also plays an integral part in the connection between amenability and quasidiagonality. I will discuss several ways of looking at the UCT problem, and phrase a number of intermediate questions. This involves the existence of Cartan MASAS on the one hand, and certain kinds of embedding problems for strongly self-absorbing C*-algebras on the other. 14:30 to 15:30 Sam Evington (University of Glasgow) W$^*$-Bundles and Continuous Families of Subfactors W$^*$-bundles were first introduced by Ozawa, motivated by work on the Toms-Winter Conjecture and, more generally, the classification of C$^*$-algebras. I will begin with a brief introduction to W$^*$-bundles, explaining how they combine the measure theoretic nature of tracial von Neumann algebras with the topological nature of C$^*$-algebras. I will then discuss the relationship between the triviality problem for W$^*$-bundles and the Toms-Winter Conjecture. Finally, I will present my work with Ulrich Pennig on locally trivial W$^*$-bundles and my ongoing work on expected subbundles of W$^*$-bundles inspired by subfactor theory. 16:00 to 17:00 Koichi Shimada (Kyoto University) A classification of real-line group actions with faithful Connes--Takesaki modules on hyperfinite factors We classify certain real-line-group actions on (type III) hyperfinite factoers, up to cocycle conjugacy. More precisely, we show that an invariant called the Connes--Takesaki module completely distinguishs actions which are not approximately inner at any non-trivial point. 
Our classification result is related to the uniqueness of the hyperfinite type III_1 factor, shown by Haagerup, which is equivalent to the uniquness of real-line-group actions with a certain condition on the hyperfinite type II_{\infty} factor. We classify actions on hyperfinite type III factors with an analogous condition. The proof is based on Masuda--Tomatsu's recent work on real-line-group actions and the uniqueness of the hyperfinite type III_1 factor. 10:00 to 11:00 Stefaan Vaes (KU Leuven) Classification of free Araki-Woods factors Co-authors: Cyril Houdayer (Université Paris Sud) and Dimitri Shlyakhtenko (UCLA). Free Araki-Woods factors are a free probability analog of the type III hyperfinite factors. They were introduced by Shlyakhtenko in 1996, who completely classified the free Araki-Woods factors associated with almost periodic orthogonal representations of the real numbers. I present a joint work with Houdayer and Shlyakhtenko in which we completely classify a large class of non almost periodic free Araki-Woods factors. The key technical result is a deformation/rigidity criterion for the unitary conjugacy of two faithful normal states on a von Neumann algebra. 11:30 to 12:30 Dima Shlyakhtenko (University of California, Los Angeles) Cohomology and $L^2$-Betti numbers for subfactors and quasi-regular inclusions Co-authors: Sorin Popa (UCLA) and Stefaan Vaes (Leuven) We introduce L$^2$-Betti numbers, as well as a general homology and cohomology theory for the standard invariants of subfactors, through the associated quasi-regular symmetric enveloping inclusion of II$_1$ factors. We actually develop a (co)homology theory for arbitrary quasi-regular inclusions of von Neumann algebras. For crossed products by countable groups Γ, we recover the ordinary (co)homology of Γ. For Cartan subalgebras, we recover Gaboriau's L$^2$-Betti numbers for the associated equivalence relation. In this common framework, we prove that the L$^2$-Betti numbers vanish for amenable inclusions and we give cohomological characterizations of property (T), the Haagerup property and amenability. We compute the L$^2$-Betti numbers for the standard invariants of the Temperley-Lieb-Jones subfactors and of the Fuss-Catalan subfactors, as well as for free products and tensor products. 13:30 to 14:30 Arnaud Brothier (Università degli Studi di Roma Tor Vergata) Crossed-products by locally compact groups and intermediate subfactors. I will present examples of an action of a totally disconnected group G on a factor Q such that intermediate subfactors between Q and the crossed-product correspond to closed subgroups of G. This extends previous work of Choda and Izumi-Longo-Popa. I will discuss about the analytical difference with the case of actions of discrete groups regarding the existence of conditional expectations or operator valued weights. Finally I will talk about intermediate subfactors in the context of actions of Hecke pairs of groups. This is a joint work with Rémi Boutonnet. 14:30 to 15:30 Alexei Semikhatov (Lebedev Physical Institute) Screening operators in conformal field models and beyond INI 1 16:00 to 17:00 Alice Guionnet (ENS - Lyon) tba INI 1 19:30 to 22:00 Formal Dinner at Emmanuel College 10:00 to 11:00 Benjamin Doyon (King's College London) Conformal field theory out of equilibrium Non-equilibrium conformal field theory is the application of methods of conformal field theory to states that are far from equilibrium. 
I will describe exact results for current-carrying steady-states that occur in the partitioning protocol: two baths (half-lines) are independently thermalized at different temperatures, then joined together and let to evolve for a large time. Results include the exact energy current, the exact scattering map describing steady-state averages and correlations of all fields in the energy sector (the stress-energy tensor and its descendants), and the full scaled cumulant generating function describing the fluctuations of energy transport. I will also explain how, in space-time, the steady state occurs between contact discontinuities beyond which lie the asymptotic baths. If time permits, I will review how these results generalize to higher-dimensional conformal field theory, and to non-conformal integrable models. This is work in collaboration with Denis Bernard. 11:30 to 12:30 Alina Vdovina (Newcastle University) Buildings and C*-algebras We will give an elementary introduction to the theory of buildingsfrom a geometric point of view. Namely, we present buildings as universal coversof finite polyhedral complexes. It turns out that the combinatorial structure of these complexesgives rise to a large class of higher rank Cuntz-Krieger algebras, which K-theory can be explicitly computed. 13:30 to 14:30 Claus Kostler (University College Cork) An elementary approach to unitary representations of the Thompson group F I provide an elementary construction of unitary representations of the Thompson group F. Further I will motivate this new approach by recent results on distributional symmetries in noncommutative probability. My talk is based on joined work with Rajarama Bhat, Gwion Evans, Rolf Gohm and Stephen Wills. 14:30 to 15:30 Rolf Gohm (Aberystwyth University) Braids, Cosimplicial Identities, Spreadability, Subfactors Actions of a braid monoid give rise to cosimplicial identities. Cosimplicial identities for morphisms of (noncommutative) probability spaces lead to spreadable processes for which there is a (noncommutative) de Finetti type theorem. This scheme can be applied to braid group representations from subfactors. We discuss results and open problems of this approach. This is joint work with G. Evans and C. Koestler. 16:00 to 17:00 Alexei Davydov (Ohio University) Modular invariants for group-theoretical modular data Group-theoretical modular categories is a class of modular categories for which modular invariants can be described effectively (in group-theoretical terms). This description is useful for applications in conformal field theory, allowing classification of full CFTs with given chiral halves being holomorphic orbifolds. In condensed matter physics it can be used to classify possible boson condensations. It also provides ways of studying braided equivalences between group-theoretical modular categories. The class of modular categories can be used to provide examples of counter-intuitive behaviour of modular invariants: multiple physical realisations of a given modular invariant, non-physicality of some natural modular invariants. The talk will try to give an overview of known results and open problems. 10:00 to 11:00 Julia Plavnik (Texas A&M University) On gauging symmetry of modular categories Co-authors: Shawn X. Cui ( Stanford University), César Galindo (Universidad de los Andes), Zhenghan Wang (Microsoft Research, Station QUniversity of CaliforniaSanta Barbara) A very interesting class of fusion categories is the one formed by modular categories. 
These categories arise in a variety of mathematical subjects including topological quantum field theory, conformal field theory, representation theory of quantum groups, von Neumann algebras, and vertex operator algebras. In addition to the mathematical interest, a motivation for pursuing a classification of modular categories comes from their application in condensed matter physics and quantum computing. Gauging is a well-known theoretical tool to promote a global symmetry to a local gauge symmetry. In this talk, we will present a mathematical formulation of gauging in terms of higher category formalism. Roughly, given a unitary modular category (UMC) with a symmetry group G, gauging is a 2-step process: first extend the UMC to a G-crossed braided fusion category and then take the equivariantization of the resulting category. This is an useful tool to construct new modular categories from given ones. We will show through concrete examples which are the ingredients involved in this process. In addition, if time allows, we will mention some classification results and conjectures associated to the notion of gauging. 11:30 to 12:30 Pinhas Grossman Algebras, automorphisms, and extensions of quadratic fusion categories To a finite index subfactor there is a associated a tensor category along with a distinguished algebra object. If the subfactor has finite depth, this tensor category is a fusion category. The Brauer-Picard group of a fusion category, introduced by Etingof-Nikshych-Ostrik, is the (finite) group of Morita autoequivalences. It contains as a subgroup the outer automorphism group of the fusion category. In this talk we will decribe the Brauer-Picard groups of some quadratic fusion categories as groups of automorphisms which move around certain algebra objects. Combining this description with an operator algebraic construction, we can classify graded extensions of the Asaeda-Haagerup fusion categories. This is joint work with Masaki Izumi and Noah Snyder. 13:30 to 14:30 Noah Snyder (Indiana University) Trivalent Categories If N 14:30 to 15:30 Henry Tucker (University of California, San Diego) Eigenvalues of rotations and braids in spherical fusion categories Co-authors: Daniel Barter (University of Michigan), Corey Jones (Australian National University) Using the generalized categorical Frobenius-Schur indicators for semisimple spherical categories we have established formulas for the multiplicities of eigenvalues of generalized rotation operators. In particular, this implies for a finite depth planar algebra, the entire collection of rotation eigenvalues can be computed from the fusion rules and the traces of rotation at finitely many depths. If the category is also braided these formulas yield the multiplicities of eigenvalues for a large class of braids in the associated braid group representations. This provides the eigenvalue multiplicities for braids in terms of just the S and T matrices in the case where the category is modular. https://arxiv.org/abs/1611.00071 - arXiv:1611.00071 16:00 to 17:00 David Penneys (University of California, Los Angeles) Operator algebras in rigid C*-tensor categories, part II In this talk, we will first define a (concrete) rigid C*-tensor category. We will then highlight the main features that are important to keep in mind when passing to the abstract setting. I will repeat a fair amount of material on C*/W* algebra objects from Corey Jones' Monday talk. Today's goal will be to prove the Gelfand-Naimark theorem for C*-algebra objects in Vec(C). 
To do so, we will have to understand the analog of the W*-algebra B(H) as an algebra object in Vec(C). In the remaining time, we will elaborate on the motivation for the project from the lens of enriched quantum symmetries. This talk is based on joint work with Corey Jones (arXiv:1611.04620).
"It is important to note that Abilify MyCite's prescribing information (labeling) notes that the ability of the product to improve patient compliance with their treatment regimen has not been shown. Abilify MyCite should not be used to track drug ingestion in "real-time" or during an emergency because detection may be delayed or may not occur," the FDA said in a statement. Terms and Conditions: The content and products found at feedabrain.com, adventuresinbraininjury.com, the Adventures in Brain Injury Podcast, or provided by Cavin Balaster or others on the Feed a Brain team is intended for informational purposes only and is not provided by medical professionals. The information on this website has not been evaluated by the food & drug administration or any other medical body. We do not aim to diagnose, treat, cure or prevent any illness or disease. Information is shared for educational purposes only. Readers/listeners/viewers should not act upon any information provided on this website or affiliated websites without seeking advice from a licensed physician, especially if pregnant, nursing, taking medication, or suffering from a medical condition. This website is not intended to create a physician-patient relationship. Texas-based entrepreneur and podcaster Mansal Denton takes phenylpiracetam, a close relative of piracetam originally developed by the Soviet Union as a medication for cosmonauts, to help them endure the stresses of life in space. "I have a much easier time articulating certain things when I take it, so I typically do a lot of recording [of podcasts] on those days," he says. There is an ancient precedent to humans using natural compounds to elevate cognitive performance. Incan warriors in the 15th century would ingest coca leaves (the basis for cocaine) before battle. Ethiopian hunters in the 10th century developed coffee bean paste to improve hunting stamina. Modern athletes ubiquitously consume protein powders and hormones to enhance their training, recovery, and performance. The most widely consumed psychoactive compound today is caffeine. Millions of people use coffee and tea to be more alert and focused. But perhaps the biggest difference between Modafinil and other nootropics like Piracetam, according to Patel, is that Modafinil studies show more efficacy in young, healthy people, not just the elderly or those with cognitive deficits. That's why it's great for (and often prescribed to) military members who are on an intense tour, or for those who can't get enough sleep for physiological reasons. One study, by researchers at Imperial College London, and published in Annals of Surgery, even showed that Modafinil helped sleep-deprived surgeons become better at planning, redirecting their attention, and being less impulsive when making decisions. "Where can you draw the line between Red Bull, six cups of coffee and a prescription drug that keeps you more alert," says Michael Schrage of the MIT Center for Digital Business, who has studied the phenomenon. "You can't draw the line meaningfully - some organizations have cultures where it is expected that employees go the extra mile to finish an all-nighter. " Took pill #6 at 12:35 PM. Hard to be sure. I ultimately decided that it was Adderall because I didn't have as much trouble as I normally would in focusing on reading and then finishing my novel (Surface Detail) despite my family watching a movie, though I didn't notice any lack of appetite. Call this one 60-70% Adderall. I check the next evening and it was Adderall. 
I have a needle phobia, so injections are right out; but from the images I have found, it looks like testosterone enanthate gels using DMSO resemble other gels like Vaseline. This suggests an easy experimental procedure: spoon an appropriate dose of testosterone gel into one opaque jar, spoon some Vaseline gel into another, and pick one randomly to apply while not looking. If one gel evaporates but the other doesn't, or they have some other difference in behavior, the procedure can be expanded to something like and then half an hour later, take a shower to remove all visible traces of the gel. Testosterone itself has a fairly short half-life of 2-4 hours, but the gel or effects might linger. (Injections apparently operate on a time-scale of weeks; I'm not clear on whether this is because the oil takes that long to be absorbed by surrounding materials or something else.) Experimental design will depend on the specifics of the obtained substance. As a controlled substance (Schedule III in the US), supplies will be hard to obtain; I may have to resort to the Silk Road. The reviews on this site are a demonstration of what someone who uses the advertised products may experience. Results and experience may vary from user to user. All recommendations on this site are based solely on opinion. These products are not for use by children under the age of 18 and women who are pregnant or nursing. If you are under the care of a physician, have a known medical condition or are taking prescription medication, seek medical advice from your health care provider before taking any new supplements. All product reviews and user testimonials on this page are for reference and educational purposes only. You must draw your own conclusions as to the efficacy of any nutrient. Consumer Advisor Online makes no guarantee or representations as to the quality of any of the products represented on this website. The information on this page, while accurate at the time of publishing, may be subject to change or alterations. All logos and trademarks used in this site are owned by the trademark holders and respective companies. Because executive functions tend to work in concert with one another, these three categories are somewhat overlapping. For example, tasks that require working memory also require a degree of cognitive control to prevent current stimuli from interfering with the contents of working memory, and tasks that require planning, fluency, and reasoning require working memory to hold the task goals in mind. The assignment of studies to sections was based on best fit, according to the aspects of executive function most heavily taxed by the task, rather than exclusive category membership. Within each section, studies are further grouped according to the type of task and specific type of learning, working memory, cognitive control, or other executive function being assessed. A Romanian psychologist and chemist named Corneliu Giurgea started using the word nootropic in the 1970s to refer to substances that improve brain function, but humans have always gravitated toward foods and chemicals that make us feel sharper, quicker, happier, and more content. Our brains use about 20 percent of our energy when our bodies are at rest (compared with 8 percent for apes), according to National Geographic, so our thinking ability is directly affected by the calories we're taking in as well as by the nutrients in the foods we eat. Here are the nootropics we don't even realize we're using, and an expert take on how they work. 
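The opaque-jar self-blinding idea described above can be turned into a simple pre-generated schedule. The sketch below is a hypothetical illustration only (the file name, day count and jar labels are made up and do not come from the original text): it writes a random day-to-jar assignment to a key file, while the separate note of which jar actually holds the active preparation is sealed away until the experiment is over.

```python
import csv
import random

def make_blinded_schedule(n_days: int, key_path: str = "blinding_key.csv") -> None:
    """Randomly assign jar 'A' or 'B' to each day and save the schedule to a file.

    The experimenter follows the day->jar column each day; the mapping of A/B to
    active/placebo is decided separately (for example by a third party) and is
    not looked at until all the data have been collected.
    """
    rng = random.SystemRandom()  # avoids a predictable seed
    rows = [(day, rng.choice(["A", "B"])) for day in range(1, n_days + 1)]
    with open(key_path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["day", "jar"])
        writer.writerows(rows)

if __name__ == "__main__":
    make_blinded_schedule(60)  # e.g. two months of daily applications
```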
Only two of the eight experiments reviewed in this section found that stimulants enhanced performance, on a nonverbal fluency task in one case and in Raven's Progressive Matrices in the other. The small number of studies of any given type makes it difficult to draw general conclusions about the underlying executive function systems that might be influenced. We reached out to several raw material manufacturers and learned that Phosphatidylserine and Huperzine A are in short supply. We also learned that these ingredients can be pricey, incentivizing many companies to cut corners. A company has to have the correct ingredients in the correct proportions in order for a brain health formula to be effective. We learned that not just having the two critical ingredients was important – but, also that having the correct supporting ingredients was essential in order to be effective. As it happened, Health Supplement Wholesalers (since renamed Powder City) offered me a sample of their products, including their 5g Noopept powder ($13). I'd never used HSW before & they had some issues in the past; but I haven't seen any recent complaints, so I was willing to try them. My 5g from batch #130830 arrived quickly (photos: packaging, powder contents). I tried some (tastes just slightly unpleasant, like an ultra-weak piracetam), and I set about capping the fluffy white flour-like powder with the hilariously tiny scoop they provide. "The author's story alone is a remarkable account of not just survival, but transcendence of a near-death experience. Cavin went on to become an advocate for survival and survivors of traumatic brain injuries, discovering along the way the key role played by nutrition. But this book is not just for injury survivors. It is for anyone who wants to live (and eat) well." Fish oil (Examine.com, buyer's guide) provides benefits relating to general mood (eg. inflammation & anxiety; see later on anxiety) and anti-schizophrenia; it is one of the better supplements one can take. (The known risks are a higher rate of prostate cancer and internal bleeding, but are outweighed by the cardiac benefits - assuming those benefits exist, anyway, which may not be true.) The benefits of omega acids are well-researched. As professionals and aging baby boomers alike become more interested in enhancing their own brain power (either to achieve more in a workday or to stave off cognitive decline), a huge market has sprung up for nonprescription nootropic supplements. These products don't convince Sahakian: "As a clinician scientist, I am interested in evidence-based cognitive enhancement," she says. "Many companies produce supplements, but few, if any, have double-blind, placebo-controlled studies to show that these supplements are cognitive enhancers." Plus, supplements aren't regulated by the U.S. Food and Drug Administration (FDA), so consumers don't have that assurance as to exactly what they are getting. Check out these 15 memory exercises proven to keep your brain sharp. Zach was on his way to being a doctor when a personal health crisis changed all of that. He decided that he wanted to create wellness instead of fight illness. He lost over a 100 lbs through functional nutrition and other natural healing protocols. He has since been sharing his knowledge of nutrition and functional medicine for the last 12 years as a health coach and health educator. Certain pharmaceuticals could also qualify as nootropics. 
For at least the past 20 years, a lot of people—students, especially—have turned to attention deficit hyperactivity disorder (ADHD) drugs like Ritalin and Adderall for their supposed concentration-strengthening effects. While there's some evidence that these stimulants can improve focus in people without ADHD, they have also been linked, in both people with and without an ADHD diagnosis, to insomnia, hallucinations, seizures, heart trouble and sudden death, according to a 2012 review of the research in the journal Brain and Behavior. They're also addictive. After trying out 2 6lb packs between 12 September & 25 November 2012, and 20 March & 20 August 2013, I have given up on flaxseed meal. They did not seem to go bad in the refrigerator or freezer, and tasted OK, but I had difficulty working them into my usual recipes: it doesn't combine well with hot or cold oatmeal, and when I tried using flaxseed meal in soups I learned flaxseed is a thickener which can give soup the consistency of snot. It's easier to use fish oil on a daily basis. Do you want to try Nootropics, but confused with the plethora of information available online? If that's the case, then you might get further confused about what nootropic supplement you should buy that specifically caters to your needs. Here is a list of the top 10 Nootropics or 10 best brain supplements available in the market, and their corresponding uses: With subtle effects, we need a lot of data, so we want at least half a year (6 blocks) or better yet, a year (12 blocks); this requires 180 actives and 180 placebos. This is easily covered by $11 for Doctor's Best Best Lithium Orotate (5mg), 200-Count (more precisely, Lithium 5mg (from 125mg of lithium orotate)) and $14 for 1000x1g empty capsules (purchased February 2012). For convenience I settled on 168 lithium & 168 placebos (7 pill-machine batches, 14 batches total); I can use them in 24 paired blocks of 7-days/1-week each (48 total blocks/48 weeks). The lithium expiration date is October 2014, so that is not a problem 10:30 AM; no major effect that I notice throughout the day - it's neither good nor bad. This smells like placebo (and part of my mind is going how unlikely is it to get placebo 3 times in a row!, which is just the Gambler's fallacy talking inasmuch as this is sampling with replacement). I give it 60% placebo; I check the next day right before taking, and it is. Man! (In particular, I don't think it's because there's a sudden new surge of drugs. FDA drug approval has been decreasing over the past few decades, so this is unlikely a priori. More specifically, many of the major or hot drugs go back a long time. Bacopa goes back millennia, melatonin I don't even know, piracetam was the '60s, modafinil was '70s or '80s, ALCAR was '80s AFAIK, Noopept & coluracetam were '90s, and so on.) 70 pairs is 140 blocks; we can drop to 36 pairs or 72 blocks if we accept a power of 0.5/50% chance of reaching significance. (Or we could economize by hoping that the effect size is not 3.5 but maybe twice the pessimistic guess; a d=0.5 at 50% power requires only 12 pairs of 24 blocks.) 70 pairs of blocks of 2 weeks, with 2 pills a day requires (70 \times 2) \times (2 \times 7) \times 2 = 3920 pills. I don't even have that many empty pills! I have <500; 500 would supply 250 days, which would yield 18 2-week blocks which could give 9 pairs. 
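The block and pill arithmetic above feeds directly into a conventional power calculation. As a rough, hedged sketch of how such numbers can be reproduced (assuming a paired design analysed as a one-sample t-test on the paired differences, a one-sided alpha of 0.05, and the standardized effect size d = 0.5 mentioned above; different assumptions will shift the results), one could use statsmodels:

```python
# Hedged sketch of a power analysis for a paired self-experiment; this is not the
# original author's calculation, and the assumptions (one-sided alpha = 0.05,
# d = 0.5 on the paired differences) are stated in the text above the code.
from statsmodels.stats.power import TTestPower

analysis = TTestPower()

# Pairs needed for d = 0.5 at 50% power (compare with the ~12 pairs mentioned above)
n_pairs = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.5,
                               alternative="larger")
print(f"pairs needed for d=0.5 at 50% power: {n_pairs:.1f}")

# Power actually achieved with 9 pairs under the same assumptions
power_9 = analysis.solve_power(effect_size=0.5, nobs=9, alpha=0.05,
                               alternative="larger")
print(f"power with 9 pairs at d=0.5: {power_9:.2f}")
```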
9 pairs would give me a power of: I almost resigned myself to buying patches to cut (and let the nicotine evaporate) and hope they would still stick on well enough afterwards to be indistinguishable from a fresh patch, when late one sleepless night I realized that a piece of nicotine gum hanging around on my desktop for a week proved useless when I tried it, and that was the answer: if nicotine evaporates from patches, then it must evaporate from gum as well, and if gum does evaporate, then to make a perfect placebo all I had to do was cut some gum into proper sizes and let the pieces sit out for a while. (A while later, I lost a piece of gum overnight and consumed the full 4mg to no subjective effect.) Google searches led to nothing indicating I might be fooling myself, and suggested that evaporation started within minutes in patches and a patch was useless within a day. Just a day is pushing it (who knows how much is left in a useless patch?), so I decided to build in a very large safety factor and let the gum sit for around a month rather than a single day. A synthetic derivative of Piracetam, aniracetam is believed to be the second most widely used nootropic in the Racetam family, popular for its stimulatory effects because it enters the bloodstream quickly. Initially developed for memory and learning, many anecdotal reports also claim that it increases creativity. However, clinical studies show no effect on the cognitive functioning of healthy adult mice. …Four subjects correctly stated when they received nicotine, five subjects were unsure, and the remaining two stated incorrectly which treatment they received on each occasion of testing. These numbers are sufficiently close to chance expectation that even the four subjects whose statements corresponded to the treatments received may have been guessing. The abuse liability of caffeine has been evaluated.147,148 Tolerance development to the subjective effects of caffeine was shown in a study in which caffeine was administered at 300 mg twice each day for 18 days.148 Tolerance to the daytime alerting effects of caffeine, as measured by the MSLT, was shown over 2 days on which 250 g of caffeine was given twice each day48 and to the sleep-disruptive effects (but not REM percentage) over 7 days of 400 mg of caffeine given 3 times each day.7 In humans, placebo-controlled caffeine-discontinuation studies have shown physical dependence on caffeine, as evidenced by a withdrawal syndrome.147 The most frequently observed withdrawal symptom is headache, but daytime sleepiness and fatigue are also often reported. The withdrawal-syndrome severity is a function of the dose and duration of prior caffeine use…At higher doses, negative effects such as dysphoria, anxiety, and nervousness are experienced. The subjective-effect profile of caffeine is similar to that of amphetamine,147 with the exception that dysphoria/anxiety is more likely to occur with higher caffeine doses than with higher amphetamine doses. Caffeine can be discriminated from placebo by the majority of participants, and correct caffeine identification increases with dose.147 Caffeine is self-administered by about 50% of normal subjects who report moderate to heavy caffeine use. In post-hoc analyses of the subjective effects reported by caffeine choosers versus nonchoosers, the choosers report positive effects and the nonchoosers report negative effects. 
Interestingly, choosers also report negative effects such as headache and fatigue with placebo, and this suggests that caffeine-withdrawal syndrome, secondary to placebo choice, contributes to the likelihood of caffeine self-administration. This implies that physical dependence potentiates behavioral dependence to caffeine. A study mentioned in Neuropsychopharmacology as of August 2002, revealed that Bacopa Monnieri decreases the rate of forgetting newly acquired information, memory consolidations, and verbal learning rate. It also helps in enhancing the nerve impulse transmission, which leads to increased alertness. It is also known to relieve the effects of anxiety and depression. All these benefits happen as Bacopa Monnieri dosage helps in activating choline acetyltransferase and inhibiting acetylcholinesterase which enhances the levels of acetylcholine in the brain, a chemical that is also associated in improving memory and attention. With all these studies pointing to the nootropic benefits of some essential oils, it can logically be concluded then that some essential oils can be considered "smart drugs." However, since essential oils have so much variety and only a small fraction of this wide range has been studied, it cannot be definitively concluded that absolutely all essential oils have brain-boosting benefits. The connection between the two is strong, however. As it happens, these are areas I am distinctly lacking in. When I first began reading about testosterone I had no particular reason to think it might be an issue for me, but it increasingly sounded plausible, an aunt independently suggested I might be deficient, a biological uncle turned out to be severely deficient with levels around 90 ng/dl (where the normal range for 20-49yo males is 249-839), and finally my blood test in August 2013 revealed that my actual level was 305 ng/dl; inasmuch as I was 25 and not 49, this is a tad low. Gamma-aminobutyric acid, also known as GABA, naturally produced in the brain from glutamate, is a neurotransmitter that helps in the communication between the nervous system and brain. The primary function of this GABA Nootropic is to reduce the additional activity of the nerve cells and helps calm the mind. Thus, it helps to improve various conditions, like stress, anxiety, and depression by decreasing the beta brain waves and increasing the alpha brain waves. It is one of the best nootropic for anxiety that you can find in the market today. As a result, cognitive abilities like memory power, attention, and alertness also improve. GABA helps drug addicts recover from addiction by normalizing the brain's GABA receptors which reduce anxiety and craving levels in the absence of addictive substances. Adrafinil is Modafinil's predecessor, because the scientists tested it as a potential narcolepsy drug. It was first produced in 1974 and immediately showed potential as a wakefulness-promoting compound. Further research showed that Adrafinil is metabolized into its component parts in the liver, that is into inactive modafinil acid. Ultimately, Modafinil has been proclaimed the primary active compound in Adrafinil. Those who have taken them swear they do work – though not in the way you might think. Back in 2015, a review of the evidence found that their impact on intelligence is "modest". But most people don't take them to improve their mental abilities. Instead, they take them to improve their mental energy and motivation to work. 
(Both drugs also come with serious risks and side effects – more on those later). The hormone testosterone (Examine.com; FDA adverse events) needs no introduction. This is one of the scariest substances I have considered using: it affects so many bodily systems in so many ways that it seems almost impossible to come up with a net summary, either positive or negative. With testosterone, the problem is not the usual nootropics problem that that there is a lack of human research, the problem is that the summary constitutes a textbook - or two. That said, the 2011 review The role of testosterone in social interaction (excerpts) gives me the impression that testosterone does indeed play into risk-taking, motivation, and social status-seeking; some useful links and a representative anecdote: One often-cited study published in the British Journal of Pharmacology looked at cognitive function in the elderly and showed that racetam helped to improve their brain function.19 Another study, which was published in Psychopharmacology, looked at adult volunteers (including those who are generally healthy) and found that piracetam helped improve their memory.20 Neuro Optimizer is Jarrow Formula's offering on the nootropic industry, taking a more creative approach by differentiating themselves as not only a nootropic that enhances cognitive abilities, but also by making sure the world knows that they have created a brain metabolizer. It stands out from all the other nootropics out there in this respect, as well as the fact that they've created an all-encompassing brain capsule. What do they really mean by brain metabolizer, though? It means that their capsule is able to supply nutrition… Learn More... Sulbutiamine, mentioned earlier as a cholinergic smart drug, can also be classed a dopaminergic, although its mechanism is counterintuitive: by reducing the release of dopamine in the brain's prefrontal cortex, the density of dopamine receptors actually increase after continued Sulbutiamine exposure, through a compensatory mechanism. (This provides an interesting example of how dividing smart drugs into sensible "classes" is a matter of taste as well as science, especially since many of them create their discernable neural effects through still undefined mechanisms.) Nootropics are becoming increasingly popular as a tool for improving memory, information recall, and focus. Though research has not yet determined the mechanism for how nootropics work, it is clear that they provide significant cognitive benefits. Additionally, through a variety of hypothesized biological mechanisms, these compounds are thought to have the potential to improve vision. There is much to be appreciated in a brain supplement like BrainPill (never mind the confusion that may stem from the generic-sounding name) that combines tried-and-tested ingredients in a single one-a-day formulation. The consistency in claims and what users see in real life is an exemplary one, which convinces us to rate this powerhouse as the second on this review list. Feeding one's brain with nootropics and related supplements entails due diligence in research and seeking the highest quality, and we think BrainPill is up to task. Learn More... Take at 11 AM; distractions ensue and the Christmas tree-cutting also takes up much of the day. By 7 PM, I am exhausted and in a bad mood. 
While I don't expect day-time modafinil to buoy me up, I do expect it to at least buffer me against being tired, and so I conclude placebo this time, and with more confidence than yesterday (65%). I check before bed, and it was placebo. Smart pills are defined as drugs or prescription medication used to treat certain mental disorders, from milder ones such as brain fog, to some more severe like ADHD. They are often referred to as 'nootropics' but even though the two terms are often used interchangeably, smart pills and nootropics represent two different types of cognitive enhancers. Since my experiment had a number of flaws (non-blind, varying doses at varying times of day), I wound up doing a second better experiment using blind standardized smaller doses in the morning. The negative effect was much smaller, but there was still no mood/productivity benefit. Having used up my first batch of potassium citrate in these 2 experiments, I will not be ordering again since it clearly doesn't work for me. For proper brain function, our CNS (Central Nervous System) requires several amino acids. These derive from protein-rich foods. Consider amino acids to be protein building blocks. Many of them are dietary precursors to vital neurotransmitters in our brain. Epinephrine (adrenaline), serotonin, dopamine, and norepinephrine assist in enhancing mental performance. A few examples of amino acid nootropics are: The goal of this article has been to synthesize what is known about the use of prescription stimulants for cognitive enhancement and what is known about the cognitive effects of these drugs. We have eschewed discussion of ethical issues in favor of simply trying to get the facts straight. Although ethical issues cannot be decided on the basis of facts alone, neither can they be decided without relevant facts. Personal and societal values will dictate whether success through sheer effort is as good as success with pharmacologic help, whether the freedom to alter one's own brain chemistry is more important than the right to compete on a level playing field at school and work, and how much risk of dependence is too much risk. Yet these positions cannot be translated into ethical decisions in the real world without considerable empirical knowledge. Do the drugs actually improve cognition? Under what circumstances and for whom? Who will be using them and for what purposes? What are the mental and physical health risks for frequent cognitive-enhancement users? For occasional users? So the chi-squared believes there is a statistically-significant difference, the two-sample test disagrees, and the binomial also disagrees. Since I regarded it as a dubious theory, can't see a difference, and the binomial seems like the most appropriate test, I conclude that several months of 1mg iodine did not change my eye color. (As a final test, when I posted the results on the Longecity forum where people were claiming the eye color change, I swapped the labels on the photos to see if anyone would claim something along the lines when I look at the photos, I can see a difference!. I thought someone might do that, which would be a damning demonstration of their biases & wishful thinking, but no one did.) From the standpoint of absorption, the drinking of tobacco juice and the interaction of the infusion or concoction with the small intestine is a highly effective method of gastrointestinal nicotine administration. 
The epithelial area of the intestines is incomparably larger than the mucosa of the upper tract including the stomach, and the small intestine represents the area with the greatest capacity for absorption (Levine 1983:81-83). As practiced by most of the sixty-four tribes documented here, intoxicated states are achieved by drinking tobacco juice through the mouth and/or nose…The large intestine, although functionally little equipped for absorption, nevertheless absorbs nicotine that may have passed through the small intestine. "I enjoyed this book. It was full of practical information. It was easy to understand. I implemented some of the ideas in the book and they have made a positive impact for me. Not only is this book a wealth of knowledge it helps you think outside the box and piece together other ideas to research and helps you understand more about TBI and the way food might help you mitigate symptoms." Methylphenidate, commonly known as Ritalin, is a stimulant first synthesised in the 1940s. More accurately, it's a psychostimulant - often prescribed for ADHD - that is intended as a drug to help focus and concentration. It also reduces fatigue and (potentially) enhances cognition. Similar to Modafinil, Ritalin is believed to reduce dissipation of dopamine to help focus. Ritalin is a Class B drug in the UK, and possession without a prescription can result in a 5 year prison sentence. Please note: Side Effects Possible. See this article for more on Ritalin. Taken together, the available results are mixed, with slightly more null results than overall positive findings of enhancement and evidence of impairment in one reversal learning task. As the effect sizes listed in Table 5 show, the effects when found are generally substantial. When drug effects were assessed as a function of placebo performance, genotype, or self-reported impulsivity, enhancement was found to be greatest for participants who performed most poorly on placebo, had a COMT genotype associated with poorer executive function, or reported being impulsive in their everyday lives. In sum, the effects of stimulants on cognitive control are not robust, but MPH and d-AMP appear to enhance cognitive control in some tasks for some people, especially those less likely to perform well on cognitive control tasks. The question of how much nonmedical use of stimulants occurs on college campuses is only partly answered by the proportion of students using the drugs in this way. The other part of the answer is how frequently they are used by those students. Three studies addressed this issue. Low and Gendaszek (2002) found a high past-year rate of 35.3%, but only 10% and 8% of this population used monthly and weekly, respectively. White et al. (2006) found a larger percentage used frequently: 15.5% using two to three times per week and 33.9% using two to three times per month. Teter et al. (2006) found that most nonmedical users take prescription stimulants sporadically, with well over half using five or fewer times and nearly 40% using only once or twice in their lives. DeSantis et al. (2008) offered qualitative evidence on the issue, reporting that students often turned to stimulants at exam time only, particularly when under pressure to study for multiple exams at the same time. Thus, there appears to be wide variation in the regularity of stimulant use, with the most common pattern appearing to be infrequent use. Legal issues aside, this wouldn't be very difficult to achieve. 
Many companies already have in-house doctors who give regular health check-ups — including drug tests — which could be employed to control and regulate usage. Organizations could integrate these drugs into already existing wellness programs, alongside healthy eating, exercise, and good sleep. But there are some potential side effects, including headaches, anxiety and insomnia. Part of the way modafinil works is by shifting the brain's levels of norepinephrine, dopamine, serotonin and other neurotransmitters; it's not clear what effects these shifts may have on a person's health in the long run, and some research on young people who use modafinil has found changes in brain plasticity that are associated with poorer cognitive function. In the largest nationwide study, McCabe et al. (2005) sampled 10,904 students at 119 public and private colleges and universities across the United States, providing the best estimate of prevalence among American college students in 2001, when the data were collected. This survey found 6.9% lifetime, 4.1% past-year, and 2.1% past-month nonmedical use of a prescription stimulant. It also found that prevalence depended strongly on student and school characteristics, consistent with the variability noted among the results of single-school studies. The strongest predictors of past-year nonmedical stimulant use by college students were admissions criteria (competitive and most competitive more likely than less competitive), fraternity/sorority membership (members more likely than nonmembers), and gender (males more likely than females). Barbara Sahakian, a neuroscientist at Cambridge University, doesn't dismiss the possibility of nootropics to enhance cognitive function in healthy people. She would like to see society think about what might be considered acceptable use and where it draws the line – for example, young people whose brains are still developing. But she also points out a big problem: long-term safety studies in healthy people have never been done. Most efficacy studies have only been short-term. "Proving safety and efficacy is needed," she says. Some cognitive enhancers, such as donepezil and galantamine, are prescribed for elderly patients with impaired reasoning and memory deficits caused by various forms of dementia, including Alzheimer disease, Parkinson disease with dementia, dementia with Lewy bodies, and vascular dementia. Children and young adults with attention-deficit/hyperactivity disorder (ADHD) are often treated with the cognitive enhancers Ritalin (methylphenidate) or Adderall (mixed amphetamine salts). Persons diagnosed with narcolepsy find relief from sudden attacks of sleep through wake-promoting agents such as Provigil (modafinil). Generally speaking, cognitive enhancers improve working and episodic (event-specific) memory, attention, vigilance, and overall wakefulness but act through different brain systems and neurotransmitters to exert their enhancing effects. As far as anxiety goes, psychiatrist Emily Deans has an overview of why the Kiecolt-Glaser et al 2011 study is nice; she also discusses why fish oil seems like a good idea from an evolutionary perspective. There was also a weaker earlier 2005 study also using healthy young people, which showed reduced anger/anxiety/depression plus slightly faster reactions. The anti-stress/anxiolytic may be related to the possible cardiovascular benefits (Carter et al 2013). 
"In the hospital and ICU struggles, this book and Cavin's experience are golden, and if we'd have had this book's special attention to feeding tube nutrition, my son would be alive today sitting right here along with me saying it was the cod liver oil, the fish oil, and other nutrients able to be fed to him instead of the junk in the pharmacy tubes, that got him past the liver-test results, past the internal bleeding, past the brain difficulties controlling so many response-obstacles back then. Back then, the 'experts' in rural hospitals were unwilling to listen, ignored my son's unexpected turnaround when we used codliver oil transdermally on his sore skin, threatened instead to throw me out, but Cavin has his own proof and his accumulated experience in others' journeys. Cavin's boxed areas of notes throughout the book on applying the brain nutrient concepts in feeding tubes are powerful stuff, details to grab onto and run with… hammer them! Finally, a workforce high on stimulants wouldn't necessarily be more productive overall. "One thinks 'are these things dangerous?' – and that's important to consider in the short term," says Huberman. "But there's also a different question, which is: 'How do you feel the day afterwards?' Maybe you're hyper-focused for four hours, 12 hours, but then you're below baseline for 24 or 48." Nevertheless, a drug that improved your memory could be said to have made you smarter. We tend to view rote memory, the ability to memorize facts and repeat them, as a dumber kind of intelligence than creativity, strategy, or interpersonal skills. "But it is also true that certain abilities that we view as intelligence turn out to be in fact a very good memory being put to work," Farah says. Take quarter at midnight, another quarter at 2 AM. Night runs reasonably well once I remember to eat a lot of food (I finish a big editing task I had put off for weeks), but the apathy kicks in early around 4 AM so I gave up and watched Scott Pilgrim vs. the World, finishing around 6 AM. I then read until it's time to go to a big shotgun club function, which occupies the rest of the morning and afternoon; I had nothing to do much of the time and napped very poorly on occasion. By the time we got back at 4 PM, the apathy was completely gone and I started some modafinil research with gusto (interrupted by going to see Puss in Boots). That night: Zeo recorded 8:30 of sleep, gap of about 1:50 in the recording, figure 10:10 total sleep; following night, 8:33; third night, 8:47; fourth, 8:20 (▇▁▁▁). AMP was first investigated as an asthma medication in the 1920s, but its psychological effects were soon noticed. These included increased feelings of energy, positive mood, and prolonged physical endurance and mental concentration. These effects have been exploited in a variety of medical and nonmedical applications in the years since they were discovered, including to treat depression, to enhance alertness in military personnel, and to provide a competitive edge in athletic competition (Rasmussen, 2008). Today, AMP remains a widely used and effective treatment for ADHD (Wilens, 2006). Racetams are often used as a smart drug by finance workers, students, and individuals in high-pressure jobs as a way to help them get into a mental flow state and work for long periods of time. Additionally, the habits and skills that an individual acquires while using a racetam can still be accessed when someone is not taking racetams because it becomes a habit.
CommonCrawl
Energy Informatics Investigating the performance gap between testing on real and denoised aggregates in non-intrusive load monitoring Christoph Klemenjak ORCID: orcid.org/0000-0002-0113-63511, Stephen Makonin2 & Wilfried Elmenreich1 Energy Informatics volume 4, Article number: 3 (2021) Cite this article Prudent and meaningful performance evaluation of algorithms is essential for the progression of any research field. In the field of Non-Intrusive Load Monitoring (NILM), performance evaluation can be conducted on real-world aggregate signals provided by smart energy meters, or on artificial superpositions of individual load signals (i.e., denoised aggregates). It has long been suspected that testing on these denoised aggregates provides better evaluation results, mainly because the signal is less complex. Complexity in real-world aggregate signals increases with the number of unknown/untracked loads. Although this is a known performance reporting problem, an investigation into the actual performance gap between real and denoised testing is still pending. In this paper, we examine the performance gap between testing on real-world and denoised aggregates with the aim of bringing clarity to this matter. Starting with an assessment of noise levels in datasets, we find significant differences across test cases. We give broad insights into our evaluation setup, comprising three load disaggregation algorithms, two of them relying on neural network architectures. The results presented in this paper, based on studies covering three scenarios with ascending noise levels, show a strong tendency towards load disaggregation algorithms providing significantly better performance on denoised aggregate signals. A closer look at the outcome of our studies reveals that all appliance types could be subject to this phenomenon. We conclude the paper by discussing aspects that could be causing these considerable gaps between real and denoised testing in NILM. Effective energy management in smart grids requires a fair amount of monitoring and controlling of electrical load to achieve optimal energy utilization and, ultimately, reduce energy consumption (Gopinath et al. 2020). With regard to individual buildings, load monitoring can be implemented in an intrusive or non-intrusive fashion. The latter is often referred to as Non-Intrusive Load Monitoring (NILM) or load disaggregation. NILM, dating back to the seminal work presented in Hart (1992), comprises a set of techniques to identify active electrical appliance signals from the aggregate load signal reported by a smart meter (Salem et al. 2020). Performance evaluation of NILM algorithms can be carried out in a noised or denoised manner, where the difference lies in the aggregate signal considered as input. Whereas noised scenarios employ signals (i.e., time series) obtained from smart meters, denoised testing scenarios consider superpositions of individual appliance signals (i.e., denoised aggregates). Figures 1 and 2 illustrate such real and denoised signals for two households found in NILM datasets. Depending on how many appliance signals are considered when deriving a denoised aggregate, there can be test scenarios in which the denoised aggregate differs considerably from its real-world counterpart, as shown in Fig. 2.
Fig. 1 Real and denoised aggregate in the case of UK-DALE house 5
Fig. 2 Real and denoised aggregate in the case of REFIT house 2
While a large proportion of contributions proposed for NILM is evaluated following noised testing scenarios, exceptions to this unwritten rule can be observed (Wittmann et al. 2018). The problem with this matter lies in the complexity of the test setup, as denoised aggregates are suspected to pose simpler disaggregation problems (Makonin and Popowich 2015). Consequently, our hypothesis is that the same disaggregation algorithm applied to the denoised version of a real-world aggregate signal yields considerably better performance, thus communicating a distorted picture of the capabilities of the presented algorithm. This paper presents a study focusing on the difference between denoised and real-world signal testing scenarios in the context of performance evaluation in NILM. We consider data of 15 appliances extracted from three datasets. Each dataset reports an aggregate signal with additional residual noise. For testing, we select households with different levels of residual noise. We incorporate one basic load disaggregation approach and two approaches based on neural networks to obtain a broad understanding of whether or not the noise level of aggregate power signals impacts energy estimation performance. Finally, we discuss how the disaggregation performance is affected by signal noise levels with regard to different appliance types. Despite the possibly far-reaching implications of this aspect for NILM, relatively little is understood about the actual performance gap between real and denoised testing. In Makonin and Popowich (2015), the hypothesis that denoised testing results in better performance was expressed first. Further, the authors introduce a measure to assess the noise level of aggregate signals. This measure has found application in a limited number of studies, in which the noise level was reported alongside the performance of load disaggregation algorithms on real-world aggregates (Makonin et al. 2015; Zhao et al. 2018). However, no comparison to the denoised testing case has been conducted. In Klemenjak et al. (2020), the noise levels of several NILM datasets were determined. The authors report basic parameters of several NILM datasets and find that noise levels in real aggregate signals vary significantly among the observed datasets. Few attempts have been made to evaluate NILM algorithms on both real and denoised aggregates, such as that presented for the AFAMAP approach in Bonfigli et al. (2017). In subsequent work (Bonfigli et al. 2018), an improved version of denoising autoencoders for NILM was proposed and compared to the state of the art at that time. Although the authors did not investigate the performance gap between real and denoised testing, a tendency can be derived for this particular case in both contributions, confirming the motivation for the studies presented in this paper. Assessing signal noise levels NILM has been approached in various ways that can be categorized into event detection and energy estimation approaches (Pereira and Nunes 2018). In the following, we focus on the energy estimation viewpoint as the precursor of the event detection stage in the disaggregation process.
We define NILM as the problem of generating estimates \(\left[\hat{x}_{t}^{(1)}, \dots, \hat{x}_{t}^{(M)}\right]\) of the actual power consumption \(\left[x_{t}^{(1)}, \dots, x_{t}^{(M)}\right]\) of M electrical appliances at time t given only the aggregated power consumption \(y_{t}\), where the aggregate power signal \(y_{t}\) consists of $$ y_{t} = \sum_{i=1}^{M}{x_{t}^{(i)}} + \eta_{t} $$ that is, M appliance-level signals \(x_{t}^{(i)}\) and a residual term \(\eta_{t}\). The residual term comprises (measurement) noise, unmetered electrical load, and unexpected or unaccounted anomalies (Makonin and Popowich 2015). To quantify the share of the residual term in an aggregate signal, the noise-aggregate ratio NAR, defined as: $$ \text{NAR} = \frac{\sum_{t=1}^{T}{\eta_{t}}}{\sum_{t=1}^{T}{y_{t}}} = \frac{\sum_{t=1}^{T}{\left|y_{t}-\sum_{i=1}^{M}{x_{t}^{(i)}}\right|}}{\sum_{t=1}^{T}{y_{t}}} $$ was introduced in Makonin and Popowich (2015). This ratio can be computed for any type of power signal, provided that readings of the aggregate and individual appliances are available. A NAR of 0.15 indicates that 15% of the observed power signal can be attributed to the residual term. Hence, the ratio indicates to what degree information on the aggregate's components is available. To get an impression of NAR levels to be expected in real-world settings, we compute this ratio for household measurements contained in the energy datasets AMPds2 (Makonin et al. 2016), COMBED (Batra et al. 2014a), ECO (Beckel et al. 2014), iAWE (Batra et al. 2013), REFIT (Murray et al. 2017), and UK-DALE (Kelly and Knottenbelt 2015a). As intended by the authors of Makonin and Popowich (2015), we consider all submeter signals recorded during the measurement campaign to compute the NAR. These datasets were selected because of their compatibility with NILMTK, a toolkit that enables reproducible NILM experiments (Batra et al. 2014b; Batra et al. 2019). We excluded the dataset BLUED (Anderson et al. 2012) due to the lack of sub-metered power data, and Tracebase (Reinhardt et al. 2012) and GREEND (Monacchi et al. 2014) due to the lack of household aggregate power data. We summarize the derived values in Table 1 in conjunction with further statistics on the measurement campaigns, such as duration or number of submeters. Table 1 Noise levels in NILM datasets Generally speaking, measurement campaigns strive to record the energy consumption and other parameters of interest in households or industrial facilities over a certain time period. Though sharing this common aim, considerable differences can be observed in the way past campaigns have been conducted. As Table 1 shows, durations range from a couple of days to several years of data, which impacts the number of appliance activations and events found in the final dataset. Further, we identify considerable variations with regard to AC power types as well as the number of submeters installed during campaigns. It should be pointed out that there seems to be a lack of consistency, in the sense that measurement setups differ not only between datasets but also within some of the campaigns considered by our comparison (e.g., UK-DALE). As concerns the noise-aggregate ratio (NAR), we observe considerable variations across datasets and households. Interestingly, the NAR ranges between a few percent, as is the case for household 2 in the ECO dataset, and an excessive 84.7% in household 5 of the same dataset.
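Given a mains signal and the corresponding submeter channels resampled to a common index, the NAR reduces to a few lines of code. The following is a minimal Python sketch, not code from the paper or from NILMTK; the pandas objects and variable names are assumptions made for illustration.

import pandas as pd

def noise_aggregate_ratio(aggregate: pd.Series, submeters: pd.DataFrame) -> float:
    # Share of the aggregate signal not explained by the metered appliances.
    # `aggregate` holds the mains readings, `submeters` one column per appliance,
    # both aligned to the same timestamps.
    denoised = submeters.sum(axis=1)          # superposition of appliance signals
    residual = (aggregate - denoised).abs()   # |y_t - sum_i x_t^(i)|
    return float(residual.sum() / aggregate.sum())

A return value of 0.15 would mean that 15% of the observed energy is residual (measurement noise plus unmetered load); the intermediate sum of the submeter columns corresponds to the denoised aggregate used in the evaluation setup below.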
Further, there are indications that the number of submeters used in the course of dataset collection can, but does not necessarily, have an impact on the noise level of the household's aggregate signal, since what matters is which appliances are left unmetered during a measurement campaign (low-power appliances vs. big consumers). As concerns house 1 to house 5 in REFIT, we consistently observe moderate to high noise levels, which may be the result of the low number of submeters incorporated in the measurement campaign. On the other hand, it should be noted that the measurement campaign conducted to obtain REFIT shows remarkable consistency in the sense that the exact same number of submeters has been applied to every single household in the study and, more importantly, the same AC power type has been considered at aggregate and appliance level at every site. In contrast to that, Table 1 reveals that in the case of houses 3 and 4 in UK-DALE, apparent power was recorded at aggregate level, whereas active power was considered at appliance level only. As our definition of NAR demands the same AC power type at aggregate and submeter level, no such ratio could be computed in those cases. The same applies to all sites of the REDD (Kolter and Johnson 2011) dataset, according to the NILMTK dataset converter (Footnote 1). For this reason, REDD has not been considered in this study. Evaluation setup To gain a comprehensive understanding of the impact of noise on the disaggregation performance of algorithms, we selected three households with ascending levels of residual noise: household 2 of the ECO dataset (Beckel et al. 2014) with a NAR of 5.9%, household 5 of the UK-DALE dataset (Kelly and Knottenbelt 2015a) with a NAR of 27.5%, and household 2 of the REFIT dataset (Murray et al. 2017) with a NAR of 65.1%. This way, we incorporate one instance each for disaggregation problems with low, moderate, and high noise levels. We selected five electrical appliances for every household, considering a wide range of appliance types. We extracted 244 days for ECO, 82 days for UK-DALE and 275 days for REFIT while applying a sampling interval of 10 s. Table 2 provides further information on training and test sets. The amount of data used per household was governed by availability in the case of ECO and UK-DALE, as can be learned from Table 1. We split each dataset into a training set, a validation set, and a test set. This splitting was applied to all three households. We considered the smart meter signal as present in the datasets and obtained the denoised version of the aggregate by superposition of the individual appliance signals following: $$ y_{t} = \sum_{i=1}^{M}{x_{t}^{(i)}} $$ Table 2 Details on intervals and dataset splitting It should be noted that while deriving the denoised aggregate of a household, we considered all appliance signals available in the respective dataset. For instance, the denoised aggregate in the case of UK-DALE's house 5 is found by superposition of 24 appliance signals, as can be learned from Table 1. For experimental evaluations, we utilize the latest version of NILMTK. The toolkit integrates several basic benchmark algorithms as well as load disaggregation algorithms based on Deep Neural Networks (DNN). In the course of experiments, we consider the traditional CO approach and two approaches based on DNNs: The Combinatorial Optimization (CO) algorithm, introduced in Hart (1992), has been used repeatedly in the literature as a baseline (Batra et al. 2019).
The CO algorithm estimates the power demand of appliances and their operational mode. Similar to the Knapsack problem (Rodriguez-Silva and Makonin 2019), estimation is performed by finding the combination of concurrently active appliances that minimizes the difference between the aggregate signal and the sum of power demands. Recurrent Neural Networks (RNN) are a subclass of neural networks that have been developed to process time series and related sequential data (Di Pietro and Hager 2019). RNNs were first proposed for NILM in Kelly and Knottenbelt (2015b); we employ the implementation presented in Krystalakos et al. (2018), which incorporates Long Short-Term Memory (LSTM) cells. Provided a sequence of aggregate readings as input, the RNN estimates the power consumption of the electrical appliance it was trained to detect for each newly observed input sample. The Sequence-to-point (S2P) technique, relying on convolutional neural networks, follows a sliding-window approach in which the network predicts the midpoint element of an output time window based on an input sequence consisting of aggregate power readings (Zhang et al. 2018). The basic idea behind this method is to implement a non-linear regression between input window and midpoint element, which has been applied successfully to speech and image processing (van den Oord et al. 2016). In a recent benchmarking study of NILM approaches, S2P was observed to be amongst the most advanced disaggregation techniques at that time (Reinhardt and Klemenjak 2020). While the CO approach does not need to be parametrized, we set the number of training epochs to 25 during training of the neural networks. Further, we employ an input sequence length of 49 for LSTM, inspired by Krystalakos et al. (2018), and 99 for S2P, as suggested in Batra et al. (2019). In this study, we utilize two error metrics to assess the performance of load disaggregation algorithms. The first is a metric commonly used in signal processing, the Mean Absolute Error (MAE), defined as: $$ \text{MAE}^{(i)} = \frac{1}{T} \cdot \sum_{t=1}^{T}{\left| \hat{x}_{t}^{(i)} - x_{t}^{(i)} \right|} $$ where \(x_{t}^{(i)}\) is the actual power consumption, \(\hat{x}_{t}^{(i)}\) the estimated power consumption, and T represents the number of samples. The best possible value is zero and, as we estimate the power consumption of appliances, it is measured in Watts. As a second metric, we incorporate the Normalized Disaggregation Error (NDE), defined by NILM scholars in Kolter and Jaakkola (2012) as: $$ \text{NDE}^{(i)} = \sqrt{\frac{\sum_{t=1}^{T}{\left(\hat{x}_{t}^{(i)}-x_{t}^{(i)}\right)^{2}}}{\sum_{t=1}^{T}{\left(x_{t}^{(i)}\right)^{2}}}} $$ In contrast to the MAE, the NDE is a dimensionless metric and, more importantly, belongs to the class of normalized metrics. This allows for fair comparisons of disaggregation performance between appliance types (Klemenjak et al. 2020). We summarize the outcome of our investigations in Table 3 for the MAE and Table 4 with regard to the NDE. For several appliances per household, we compare the disaggregation performance of CO, LSTM, and S2P when applied to the real-world aggregate signal, denoted as Real, and the denoised aggregate signal, denoted as Den. Table 3 Mean absolute error (MAE) in Watts for real and denoised testing Table 4 Normalized disaggregation error (NDE) for real and denoised testing In virtually all cases, we observe a strong tendency towards disaggregation algorithms providing better performance on denoised aggregate signals.
In the context of error metrics such as MAE and NDE, this means that the error observed on the real aggregate is larger than the error on the denoised aggregate. This holds true for almost all households and appliances considered, though some exceptions were identified: we spot a few cases in Table 3, namely the fridge and kettle in ECO as well as the dishwasher in UK-DALE, which show the opposite trend for the CO algorithm. The same applies to all fridges with regard to the NDE metric, as Table 4 reports. It should be pointed out that in those cases, the performance of CO on the real-world and denoised aggregate signal shows a considerable gap when compared to LSTM and S2P. Therefore, and because CO is a trivial benchmarking algorithm, we claim that these cases can be neglected. As concerns LSTM and S2P, we identify a single contradictory observation, namely in the case of the fridge in ECO's household 2. In this particular case, we observed that testing on the real-world aggregate signal results in marginally better performance. One explanation for this could be the extremely low NAR in this scenario, 5.9%, and the fridge belonging to the category of appliances with a recurrent pattern (Reinhardt and Klemenjak 2020). Having identified a clear tendency towards CO, LSTM, and S2P providing significantly better performance in the denoised signal case (i.e., lower MAE and NDE), we turn our attention to the open question of whether there exists a link between noise level and the magnitude of the performance gap between Real and Den. To investigate this further, we define the performance gap as the distance between the error observed on the real aggregate signal and the error observed when testing on the denoised aggregate signal: $$ \Delta\text{MAE} = \text{MAE}_{\text{real}} - \text{MAE}_{\text{denoised}} $$ $$ \Delta\text{NDE} = \text{NDE}_{\text{real}} - \text{NDE}_{\text{denoised}} $$ We derive ΔMAE for the cases presented in Table 3 and illustrate an excerpt of the found gaps in Fig. 3 for ECO, Fig. 4 for UK-DALE, and Fig. 5 for REFIT, where the focus of this discussion lies on the two approaches based on neural networks.
Fig. 3 Performance gap with regard to MAE for ECO house 2
Fig. 4 Performance gap with regard to MAE for UK-DALE house 5
Fig. 5 Performance gap with regard to MAE for REFIT house 2
We observe clear gaps for both NILM approaches based on neural nets, LSTM and S2P. The illustrations show that neither approach seems to be resilient to noise. This is particularly interesting as approaches relying on LSTM cells as well as sequence-to-sequence learning have received increased interest lately (Reinhardt and Klemenjak 2020; Kaselimi et al. 2019; Kaselimi et al. 2020; Mauch and Yang 2015; Wang et al. 2019). Further, we identify higher performance gaps in test cases on REFIT's house 2 compared to house 5 of UK-DALE in this study. This is particularly apparent when comparing the performance gap for the dishwasher across households, where we measure a ΔMAE many times higher in the case of REFIT. Also, we observe performance gaps twice as high for the fridge on REFIT compared to UK-DALE. The only exception to this trend is the case of LSTM for washing machines, where the performance gap of the LSTM network is smaller on REFIT than on UK-DALE. Nevertheless, it should be stressed that comparisons based on non-normalized metrics can, but need not, be misleading in some cases, since two appliances of the same kind (i.e., two dishwashers) may differ significantly in terms of power consumption.
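Both error metrics and the gap definitions above reduce to a few lines of NumPy. The following is a minimal sketch under the assumption of aligned, equally long arrays of ground-truth and estimated appliance power; the function and variable names are illustrative and not taken from the NILMTK implementation.

import numpy as np

def mae(x_true: np.ndarray, x_hat: np.ndarray) -> float:
    # Mean absolute error in Watts; zero is a perfect estimate.
    return float(np.mean(np.abs(x_hat - x_true)))

def nde(x_true: np.ndarray, x_hat: np.ndarray) -> float:
    # Normalized disaggregation error; dimensionless and comparable across appliances.
    return float(np.sqrt(np.sum((x_hat - x_true) ** 2) / np.sum(x_true ** 2)))

def gap(metric, x_true, x_hat_real, x_hat_denoised) -> float:
    # Performance gap: error on the real aggregate minus error on the denoised one.
    return metric(x_true, x_hat_real) - metric(x_true, x_hat_denoised)

# Example: delta_nde = gap(nde, fridge_true, fridge_est_real, fridge_est_denoised)
# A positive gap means the algorithm found the denoised test case easier.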
Furthermore, metrics are designed to measure specific aspects of algorithms, and hence considering several metrics during performance evaluation results in a broader understanding of the capabilities of algorithms. For these reasons, we also derived performance gaps with regard to NDE, ΔNDE, for the test cases presented in Table 4 and illustrate the derived gaps in Fig. 6 for UK-DALE and Fig. 7 for REFIT.
Fig. 6 Performance gap with regard to NDE for UK-DALE house 5
Fig. 7 Performance gap with regard to NDE for REFIT house 2
In the case of fridges, we observe substantially lower performance gaps on UK-DALE for both networks. We suspect that, as a result of the comparably high amount of noise in REFIT house 2, disaggregating the real-world aggregate signal represents a bigger challenge than disaggregating its denoised counterpart, especially when estimating the power consumption of low-power household appliances such as fridges. Interestingly, we observe considerable performance gaps not only when estimating the power consumption of low-power appliances but also for appliances with moderate or high power consumption, such as dishwashers and washing machines, as can be learned from Figs. 8 and 9. In both cases, UK-DALE and REFIT, we measure the highest ΔNDE in the case of the dishwasher. A comparison of performance gaps for dishwashers in Fig. 8 reveals that while we measure similar performance gaps in UK-DALE and REFIT, the performance gap in the case of ECO is significantly smaller. We hypothesize this is the result of the marginal noise level measured in house 2 of ECO. More importantly, this example shows that even at marginal noise levels an apparent difference in disaggregation error between real and denoised testing can be observed.
Fig. 8 Performance gap with regard to NDE for dishwashers
Fig. 9 Performance gap with regard to NDE for washing machines
A recent benchmarking study involving eight disaggregation algorithms found that S2P outperformed competing neural network architectures and concluded that S2P ranks amongst the most promising NILM approaches (Reinhardt and Klemenjak 2020). As concerns the performance of NILM algorithms, interpreted as the disaggregation error between estimated and true appliance power consumption, we find that S2P outperforms LSTM in 11 of 15 cases for the MAE metric and in 14 of 15 cases when the NDE metric is considered. Furthermore, in the vast majority of test runs, the S2P approach shows lower performance gaps than the network composed of LSTM cells in the sense of ΔMAE and ΔNDE. Insights obtained from testing on three households with considerably different NAR levels reveal that in the majority of test runs, testing on the denoised aggregate signal leads to substantially lower estimation errors and, therefore, higher estimation accuracy. A few cases showing the contrary trend were observed but can be reasonably explained. As this apparent performance gap can be attributed to a variety of aspects, we suspect two of them to have a decisive impact on this matter: First, denoised aggregates are obtained by superposition of individual appliance signals. As such, they contain fewer appliance activations and consumption patterns than aggregates obtained from smart meters. Particularly when estimating the power consumption of low-power appliances, such activations have the potential to hinder load disaggregation algorithms from providing accurate power consumption estimates.
Such cases were repeatedly observed during our studies on REFIT, where a NAR of 65.1% was measured. As depicted in Figs. 10 and 11, we detected several cases where the concurrent operation of appliances with moderate or high power consumption (e.g., dishwasher, electric stove, or washing machine) resulted in significant deviations when estimating the power consumption of the fridge. We observed such cases not only for the basic benchmarking algorithm CO but also for the advanced NILM approaches LSTM and S2P, which leads to the presumption that, despite remarkable advances in the state of the art, at least some of these algorithms may still be prone to noise in aggregate signals.
Fig. 10 An excerpt of estimates provided by S2P for the fridge in REFIT house 2 when applied to the real aggregate
Fig. 11 An excerpt of estimates provided by S2P for the fridge in REFIT house 2 when applied to the denoised aggregate
Second, we observe a substantially higher number of false-positive estimates in predictions based on real-world aggregate signals than in estimates generated from denoised aggregate signals. False positives in this context mean that the NILM algorithms predicted the appliance to consume energy at times when this was not the case. Such false positives impact the outcome of performance evaluations two-fold, as they both increase the disaggregation error and decrease the estimation accuracy of NILM algorithms. We observed repeatedly that in the real-world case, the number of false-positive estimates is considerably higher than in the denoised case. We presume that those false positives are the result of algorithms confusing appliances with similar power consumption levels. Based on the insights gained in this study, we cannot, however, confirm a clear link between the noise level, measured as NAR, and the magnitude of the performance gap between testing on real and denoised aggregates. We suspect this is due to the fact that every load disaggregation problem poses individual challenges to load disaggregation algorithms, making a comparison between moderate and high noise levels cumbersome. Though such a positive correlation between noise level and the magnitude of the performance gap could not be confirmed by our evaluation, we demonstrated that testing on denoised aggregates must be expected to result in lower disaggregation errors in the majority of test runs. Yet, we would like to stress the need for further investigation into the complexity of load disaggregation problems.
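To illustrate how the false-positive notion above can be quantified, the following minimal Python sketch counts samples where an appliance is estimated to be on although the ground truth says it is off. The on-power threshold is an assumed illustrative value; suitable thresholds are appliance-specific and are not specified in this study.

import numpy as np

def false_positive_rate(x_true: np.ndarray, x_hat: np.ndarray, on_threshold: float = 10.0) -> float:
    # Share of truly-off samples for which the estimate exceeds the on-power threshold.
    truly_off = x_true <= on_threshold
    false_pos = truly_off & (x_hat > on_threshold)
    return float(false_pos.sum() / max(int(truly_off.sum()), 1))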
Hence, we claim that testing on denoised aggregate signals can lead to a distorted image of the actual capabilities of load disaggregation algorithms in some cases, and ideally, its application should be well-considered when developing algorithms for real-world settings. All datasets used in the course of our experiments are publicly available and can be obtained from the respective project homepage. NILMTK, the toolkit employed to conduct the studies, can be downloaded from Github: https://github.com/nilmtk/nilmtk Footnote 1: https://github.com/nilmtk/nilmtk/tree/master/nilmtk/dataset_converters/redd/metadata AFAMAP: Additive factorial approximate maximum a posteriori; CO: Combinatorial optimization; DNN: Deep neural network; ECO: Electricity consumption and occupancy dataset; MAE: Mean absolute error; NAR: Noise aggregate ratio; NDE: Normalized disaggregation error; NILM: Non-intrusive load monitoring; NILMTK: Non-intrusive load monitoring toolkit; LSTM: Long short-term memory; REDD: Residential energy disaggregation dataset; S2P: Sequence-to-point learning. Anderson, K, Ocneanu A, Benitez D, Carlson D, Rowe A, Berges M (2012) BLUED: A fully labeled public dataset for event-based non-intrusive load monitoring research In: Proceedings of the 2nd KDD Workshop on Data Mining Applications in Sustainability (SustKDD), 1–5.. ACM, Beijing. Batra, N, Gulati M, Singh A, Srivastava MB (2013) It's different: Insights into home energy consumption in India In: Proceedings of the 5th ACM Workshop on Embedded Systems For Energy-Efficient Buildings, 1–8. Batra, N, Kelly J, Parson O, Dutta H, Knottenbelt W, Rogers A, Singh A, Srivastava M (2014b) NILMTK: an open source toolkit for non-intrusive load monitoring In: Proceedings of the 5th International Conference on Future Energy Systems, 265–276. Batra, N, Kukunuri R, Pandey A, Malakar R, Kumar R, Krystalakos O, Zhong M, Meira P, Parson O (2019) Towards reproducible state-of-the-art energy disaggregation In: Proceedings of the 6th ACM International Conference on Systems for Energy-Efficient Buildings, Cities, and Transportation, 193–202. Batra, N, Parson O, Berges M, Singh A, Rogers A (2014a) A comparison of non-intrusive load monitoring methods for commercial and residential buildings. arXiv preprint arXiv:1408.6595:1–11. Beckel, C, Kleiminger W, Cicchetti R, Staake T, Santini S (2014) The ECO data set and the performance of non-intrusive load monitoring algorithms In: Proceedings of the 1st ACM Conference on Embedded Systems for Energy-Efficient Buildings, 80–89. Bonfigli, R, Felicetti A, Principi E, Fagiani M, Squartini S, Piazza F (2018) Denoising autoencoders for non-intrusive load monitoring: improvements and comparative evaluation. Energy and Build 158:1461–1474. Bonfigli, R, Principi E, Fagiani M, Severini M, Squartini S, Piazza F (2017) Non-intrusive load monitoring by using active and reactive power in additive Factorial Hidden Markov Models. Appl Energy 208:1590–1607. Di Pietro, R, Hager G (2019) Handbook of medical image computing and computer assisted intervention. Chapter 21:503–519. Gopinath, R, Kumar M, Joshua CPC, Srinivas K (2020) Energy management using non-intrusive load monitoring techniques-State-of-the-art and future research directions. Sust Cities Soc 62:102411. Hart, GW (1992) Nonintrusive appliance load monitoring. Proc IEEE 80(12):1870–91.
Kaselimi, M, Doulamis N, Doulamis A, Voulodimos A, Protopapadakis E (2019) Bayesian-optimized bidirectional LSTM regression model for non-intrusive load monitoring In: ICASSP 2019-2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2747–2751.. IEEE, Brighton. Kaselimi, M, Doulamis N, Voulodimos A, Protopapadakis E, Doulamis A (2020) Context aware energy disaggregation using adaptive bidirectional LSTM models. IEEE Trans Smart Grid 11(4):3054–67. Kelly, J, Knottenbelt W (2015a) The UK-DALE dataset, domestic appliance-level electricity demand and whole-house demand from five UK homes. Sci Data 2(1):1–14. Kelly, J, Knottenbelt W (2015b) Neural NILM: Deep neural networks applied to energy disaggregation In: Proceedings of the 2nd ACM International conference on embedded systems for energy-efficient built environments (BuildSys), 55–64. Klemenjak, C, Makonin S, Elmenreich W (2020) Towards comparability in non-intrusive load monitoring: on data and performance evaluation In: 2020 IEEE Power & Energy Society Innovative Smart Grid Technologies Conference (ISGT), 1–5. Kolter, JZ, Jaakkola T (2012) Approximate inference in additive factorial hmms with application to energy disaggregation In: Artificial Intelligence and Statistics, 1472–1482. Kolter, JZ, Johnson MJ (2011) Redd: A public data set for energy disaggregation research In: Workshop on Data Mining Applications in Sustainability (SIGKDD), San Diego, CA, 59–62. Krystalakos, O, Nalmpantis C, Vrakas D (2018) Sliding window approach for online energy disaggregation using artificial neural networks In: Proceedings of the 10th Hellenic Conference on Artificial Intelligence (SETN), 1–6. Makonin, S, Ellert B, Bajic IV, Popowich F (2016) Electricity, water, and natural gas consumption of a residential house in Canada from 2012 to 2014. Sci Data 3(160037):1–12. Makonin, S, Popowich F (2015) Nonintrusive load monitoring (NILM) performance evaluation. Energy Efficiency 8(4):809–814. Makonin, S, Popowich F, Bajić IV, Gill B, Bartram L (2015) Exploiting HMM sparsity to perform online real-time nonintrusive load monitoring. IEEE Trans Smart Grid 7(6):2575–2585. Mauch, L, Yang B (2015) A new approach for supervised power disaggregation by using a deep recurrent LSTM network In: 2015 IEEE Global Conference on Signal and Information Processing (GlobalSIP), 63–67.. IEEE, Orlando. Monacchi, A, Egarter D, Elmenreich W, D'Alessandro S, Tonello AM (2014) GREEND: An energy consumption dataset of households in Italy and Austria In: 2014 IEEE International Conference on Smart Grid Communications (SmartGridComm), 511–516. Murray, D, Stankovic L, Stankovic V (2017) An electrical load measurements dataset of united kingdom households from a two-year longitudinal study. Sci Data 4(1):1–12. Pereira, L, Nunes N (2018) Performance evaluation in non-intrusive load monitoring: Datasets, metrics, and tools–A review. Wiley Interdiscip Rev Data Min Knowl Disc 8(6):1265. Reinhardt, A, Baumann P, Burgstahler D, Hollick M, Chonov H, Werner M, Steinmetz R (2012) On the accuracy of appliance identification based on distributed load metering data In: 2012 Sustainable Internet and ICT for Sustainability (SustainIT), 1–9.. IEEE, Pisa. Reinhardt, A, Klemenjak C (2020) How does load disaggregation performance depend on data characteristics? insights from a benchmarking study In: Proceedings of the Eleventh ACM International Conference on Future Energy Systems, 167–177.. Association for Computing Machinery, New York, NY, USA. 
The authors acknowledge the financial support by the University of Klagenfurt.

Institute of Networked and Embedded Systems, University of Klagenfurt, Klagenfurt, Austria: Christoph Klemenjak & Wilfried Elmenreich
School of Engineering Science, Simon Fraser University, BC V5A 1S6, Canada: Stephen Makonin

CK designed and conducted all studies presented in this paper and wrote main parts of this manuscript. SM supervised the topic during CK's visit at Simon Fraser University. WE contributed to discussions and text, and supervised the topic at University of Klagenfurt. All authors read, edited, and approved the manuscript. Correspondence to Christoph Klemenjak.

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

Klemenjak, C., Makonin, S. & Elmenreich, W. Investigating the performance gap between testing on real and denoised aggregates in non-intrusive load monitoring. Energy Inform 4, 3 (2021). https://doi.org/10.1186/s42162-021-00137-9

Keywords: Load disaggregation, Denoised testing, Energy datasets
BMC Infectious Diseases Risk of cardiovascular events from current, recent, and cumulative exposure to abacavir among persons living with HIV who were receiving antiretroviral therapy in the United States: a cohort study Kunchok Dorjee ORCID: orcid.org/0000-0003-0992-06311,4, Sanjiv M. Baxi1,2, Arthur L. Reingold1 & Alan Hubbard1,3 BMC Infectious Diseases volume 17, Article number: 708 (2017) Cite this article There is ongoing controversy regarding abacavir use in the treatment of HIV infection and the risk of subsequent development of cardiovascular disease. It is unclear how the risk varies as exposure accumulates. Using an administrative health-plan dataset, risk of cardiovascular disease events (CVDe), defined as the first episode of an acute myocardial infarction or a coronary intervention procedure, associated with abacavir exposure was assessed among HIV-infected individuals receiving antiretroviral therapy across the U.S. from October 2009 through December 2014. The data were longitudinal, and analyzed using marginal structural models. Over 114,470 person-years (n = 72,733) of ART exposure, 714 CVDe occurred at an incidence rate (IR) (95% CI) of 6·23 (5·80, 6·71)/1000 person-years. Individuals exposed to abacavir had a higher IR of CVDe of 9·74 (8·24, 11·52)/1000 person-years as compared to 5·75 (5·30, 6·24)/1000 person-years for those exposed to other antiretroviral agents. The hazard (HR; 95% CI) of CVDe was increased for current (1·43; 1·18, 1·73), recent (1·41; 1·16, 1·70), and cumulative [(1·18; 1·06, 1·31) per year] exposure to abacavir. The risk for cumulative exposure followed a bell-shaped dose-response curve peaking at 24-months of exposure. Risk was similarly elevated among participants free of pre-existing heart disease or history of illicit substance use at baseline. Current, recent, and cumulative use of abacavir was associated with an increased risk of CVDe. The findings were consistent irrespective of underlying cardiovascular risk factors. Cardiovascular disease (CVD) accounts for approximately 16% of deaths among persons living with HIV (PLWH) [1]. Risk factors for CVD are more prevalent among PLWH [2], and use of various antiretroviral (ARV) drugs has been shown to be associated with an increased risk of CVD [3]. With rapid expansion of antiretroviral therapy (ART) coverage both domestically and abroad, researchers and clinicians have become increasingly aware of potential ARV drug-related adverse events. Whether the commonly used ARV drug abacavir is associated with an increased risk of CVD has been intensely debated. Abacavir sulfate is a guanosine analog nucleoside reverse transcriptase inhibitor that possesses retroviral suppressive properties similar to tenofovir [4], and is a commonly prescribed backbone ARV agent. However, the writing of prescriptions of abacavir declined after the Data Collection on Adverse Events of Anti-HIV Drugs (D:A:D) study group reported in 2008 an increased risk of acute myocardial infarction (AMI) among PLWH exposed to abacavir [5,6,7]. Independent investigations that were subsequently carried out have both supported [7,8,9,10,11,12,13,14,15,16,17] and refuted [18,19,20,21,22,23] the D:A:D study group's findings. While studies conducted more recently have generally suggested an increased risk of CVD from abacavir exposure [8, 10, 14, 17], they were limited by few outcomes, with results occasionally underpowered [8, 17]. 
Failure to identify a clear underlying biological mechanism to explain the epidemiologic findings has added to the deliberation [24]. Furthermore, there has also been a lack of consensus regarding whether the risk of CVD from exposure to abacavir reverses within a few months of stopping the drug [5, 16] and a lack of understanding on how the risk varies as exposure accumulates. In this study, we have sought to address these limitations by investigating the risk of CVD events (CVDe) from current, recent, and cumulative exposure to abacavir among PLWH using conventional and causal statistical methods. Study design, sample collection and participants The risk of CVDe was assessed among PLWH who started ARV drugs in the U.S. between October 1, 2009 and December 31, 2014. Data were obtained from medical and prescription claims data included in the IMS' PharMetrics Plus database. October 1, 2009 was the earliest possible date for complete availability of relevant data; ART prescription history prior to this date was not available. PharMetrics Plus is a large health plan insurance claims database in the U.S., and is comprised of adjudicated claims for more than 150 million unique enrollees from across four regions of the U.S. [25]. The data undergo a series of quality checks to minimize errors. This study used a pre-defined algorithm (Fig. 1) to extract and define the study population of PLWH exposed to any ART in the database. The study population was restricted to those ≥18 years of age. The baseline time point was defined as the date of ART initiation in the database and individual follow up time was censored at the first of three events after baseline: 1) first occurrence of CVDe, 2) last recorded date of ART receipt, 3) December 31, 2014. The study was approved by the Committee for Protection of Human Subjects at the University of California, Berkeley. Algorithm for defining the study cohort from the IMS' PharMetrics Plus claims database. GPI: generic product identifier; CPT: current procedural terminology; ICD-9-CM: International Classification of Disease, 9th Revision, Clinical Modification. aAdditional filter (age ≥ 18) applied to obtain final cohort Exposure, covariate, and outcome definitions Exposures to specific ARV agents were identified by generic product identifier (GPI) codes. Person-time of exposure to abacavir was compared to exposure to ARV agents other than abacavir. Any two prescriptions for an ARV agent separated by <30 days were combined to represent a single continuous exposure; gaps ≥30 days were not combined and this person-time was not included in the analysis. These data are longitudinal, and each subject's follow up time was divided into consecutive one-month periods during which treatment was allowed to vary. The values of covariates were updated at the start of each month. The outcome of CVDe for an individual was defined as the first occurrence of an AMI or receipt of a coronary artery intervention procedure (i.e. percutaneous coronary intervention or coronary artery bypass graft) after baseline. AMI and coronary artery intervention procedures were ascertained using the International Classification of Disease, 9th Revision, Clinical Modification (ICD-9-CM) or Current Procedural Terminology (CPT) codes, respectively (Additional file 1: Table S1). The ICD-9 code used for AMI (410.xx) has been previously validated in another claims database [26]. 
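As an illustration of the exposure definition described above, the following Python sketch collapses prescription claims into continuous exposure episodes by merging refills separated by less than 30 days. This is not the authors' actual code (their processing used Teradata, SAS, and Stata on the PharMetrics Plus extracts); the column names and example records are hypothetical, and whether the 30-day gap is measured from the end of the previous supply or from the previous fill date is an assumption made here for illustration.

```python
import pandas as pd

# Hypothetical claims extract: one row per dispensed ARV prescription.
claims = pd.DataFrame({
    "patient_id": [1, 1, 1, 2],
    "drug": ["abacavir"] * 3 + ["tenofovir"],
    "fill_date": pd.to_datetime(["2010-01-05", "2010-02-01", "2010-06-01", "2010-03-10"]),
    "days_supply": [30, 30, 30, 30],
})
claims["end_date"] = claims["fill_date"] + pd.to_timedelta(claims["days_supply"], unit="D")

episodes = []
for (pid, drug), grp in claims.sort_values("fill_date").groupby(["patient_id", "drug"]):
    start, end = None, None
    for _, row in grp.iterrows():
        if start is None:
            start, end = row["fill_date"], row["end_date"]
        elif (row["fill_date"] - end).days < 30:
            # Gap shorter than 30 days: treat as one continuous exposure.
            end = max(end, row["end_date"])
        else:
            # Gap of 30 days or more: close the episode (that gap's person-time is dropped).
            episodes.append((pid, drug, start, end))
            start, end = row["fill_date"], row["end_date"]
    episodes.append((pid, drug, start, end))

episodes = pd.DataFrame(episodes, columns=["patient_id", "drug", "episode_start", "episode_end"])
print(episodes)
```

Each resulting episode would then be split into consecutive one-month person-periods, with covariates updated at the start of each month, before modelling.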
The temporal ordering of covariate, treatment, and outcome allowed for a time-varying analysis, and the opportunity for causal interpretations. The first observation of a time-dependent covariate corresponded to its baseline value and once a health condition developed, an individual was assumed to have the condition for the remainder of the study. Current exposure to abacavir was defined as exposure (yes/no) during each one-month observation period. Recent exposure was defined as exposure (yes/no) in the previous six months (inclusive of the current month). Cumulative exposure was defined as the total duration of exposure an individual had received at a particular time point in one-month increments, updated monthly. Duration of exposure ceased to accumulate upon discontinuation of the drug but resumed if the drug was restarted. HIV-infection status and covariates were ascertained using the ICD-9-CM or CPT codes (Additional file 1: Table S1). The risk of CVDe from a current, recent, and cumulative exposure to abacavir was estimated by the parameters of pooled logistic regression marginal structural models using stabilized inverse probability of treatment weights (sIPTW) [27]. The sIPTW was generated from four treatment models – two each for the numerator and the denominator of the weight [16]. For the denominator, the time point specific probability of exposure initiation was first estimated by fitting a main term pooled logistic regression to data up to the individual's first month of receiving the exposure or end of follow up for those who were never exposed. The probability of exposure continuation was then estimated by fitting the model to data after the first month of starting the exposure. The denominator was modelled as a function of baseline covariates: gender, tobacco use/smoking (ever), substance or alcohol abuse (ever), serologic evidence of hepatitis B and C infections, history of stroke, cancer or old myocardial infarction, and time-dependent covariates: age, year of ART initiation, body weight, receipt of hypoglycemic agents (i.e. sulfonylureas, biguanides, insulin, thiazolidinedione) or medications for CVDe (i.e. aspirin, beta-blocker, angiotensin converting enzyme inhibitor, angiotensin receptor blocker, calcium channel blocker, statins) or diagnoses of: chronic kidney disease (CKD), dyslipidemia, heart failure, cardiac dysrhythmia, atherosclerosis, diabetes mellitus, and hypertension. The exposure continuation model additionally contained a variable for past month's exposure status. The probabilities for the numerator of the sIPTW were similarly modelled but as a function of baseline covariates only. The follow-up time was modeled as a function of natural cubic splines with three internal knots placed at 25th, 50th and 75th percentiles. The marginal structural model was adjusted for the sIPTW and the baseline covariates. Same treatment weights were used for estimation of CVDe risk from current, recent, and cumulative exposure to abacavir. In order to assess the change in risk over time, the adjusted and marginal models were fit as a function of categories of cumulative exposure, i.e., never exposed, 1–6, 7–12, 13–18, 19–24, and >25 months of exposure. In sensitivity analyses, the study population was restricted to individuals free of CVD at baseline, and to individuals without a history of alcohol and substance abuse at baseline. 
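The weighting scheme just described can be sketched as follows. This is a simplified illustration in Python rather than the authors' implementation (their models were fit in Stata following Fewell et al. [28]). It assumes a hypothetical person-month data frame with one row per subject per month; the split into exposure-initiation and exposure-continuation models, the censoring weights, and the cubic-spline terms for follow-up time are omitted for brevity.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# pm: hypothetical person-month data, sorted by patient_id and month, with columns:
#   patient_id - subject identifier
#   abc        - 1 if exposed to abacavir in that month, else 0
#   y          - 1 if a CVD event occurred in that month, else 0
#   L0_*       - baseline covariates (e.g. sex, smoking, baseline conditions)
#   Lt_*       - time-varying covariates updated monthly (e.g. CKD, diabetes)
pm = pd.read_csv("person_months.csv")

baseline = [c for c in pm.columns if c.startswith("L0_")]
timevar = [c for c in pm.columns if c.startswith("Lt_")]

def fitted_probs(data, covs):
    """Pooled logistic model for receiving abacavir in a given month."""
    X = sm.add_constant(data[covs])
    return sm.Logit(data["abc"], X).fit(disp=False).predict(X)

# Denominator: P(exposure | baseline + time-varying covariates).
p_denom = fitted_probs(pm, baseline + timevar)
# Numerator: P(exposure | baseline covariates only), which stabilizes the weights.
p_num = fitted_probs(pm, baseline)

# Month-specific contribution to the stabilized weight, then the cumulative
# product over each subject's follow-up.
contrib = np.where(pm["abc"] == 1, p_num / p_denom, (1 - p_num) / (1 - p_denom))
pm["sw"] = pd.Series(contrib, index=pm.index).groupby(pm["patient_id"]).cumprod()

# Marginal structural model: weighted pooled logistic regression of the monthly
# outcome on current exposure plus the baseline covariates. In practice, standard
# errors should be made robust and clustered by subject.
X_out = sm.add_constant(pm[["abc"] + baseline])
msm = sm.GLM(pm["y"], X_out, family=sm.families.Binomial(), freq_weights=pm["sw"]).fit()
print(np.exp(msm.params["abc"]))  # approximate hazard ratio for current exposure
```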
Sensitivity analyses were additionally carried out to assess whether the risk of CVDe from abacavir exposure differed after adjusting for other antiretroviral agents. Using the same sIPTW models, we tested for interaction to see whether the risk of CVDe from current abacavir exposure is modified in the presence of 13 different risk factors (Additional file 1: Table S5). In addition to the marginal structural results, corresponding results from unadjusted and adjusted Cox models were calculated. This study assumes uninformative censoring. Data were extracted and processed from the main claims databases using TERADATA (Dayton, OH), SAS version 9.1 (SAS Institute, Cary, NC), and STATA version 13.1 (StataCorp, College Station, TX). The marginal structural models were implemented in STATA version 13.1, based on Fewell et al. [28]. The rationale, definition, and implementation of the marginal structural models are described in Additional file 1: Appendix 1.

Study population and incidence rates

There were 72,733 participants contributing 114,470 person-years of exposure to antiretroviral agents. On average, participants were exposed to ART for 1.5 years. The mean age of the study population was 46 years and 82% were males. The characteristics of the study population at baseline and a summary of exposure to various antiretroviral drugs are described in Tables 1 and 2, respectively. Overall, 714 CVDe occurred at an incidence rate of 6.23 (95% CI: 5.80, 6.71)/1000 person-years. Of the 714 outcomes, 137 were observed over 14,060 person-years of current exposure to abacavir at an incidence rate of 9.74 (95% CI: 8.24, 11.52)/1000 person-years, as compared to 577 outcomes over 100,410 person-years with an incidence rate of 5.75 (95% CI: 5.30, 6.24)/1000 person-years for those currently exposed to other ARV drugs. The incidence rate was highest for those exposed to abacavir between 13 and 18 months (11.32/1000 person-years) (Table 3). Of the 714 CVDe, 548 were cases of AMI. The overall incidence rate of AMI was 4.78 (95% CI: 4.39, 5.19)/1000 person-years (Additional file 1: Table S2). We calculated the population attributable risk (PAR) associated with abacavir exposure as:

$$\mathrm{PAR}=\left(\frac{\text{Risk of CVDe in total population}-\text{Risk of CVDe in unexposed population}}{\text{Risk of CVDe in total population}}\right)\times 100=\left(\frac{6.23/1000-5.75/1000}{6.23/1000}\right)\times 100\approx 8\%.$$

Table 1 Baseline characteristics of persons living with HIV in the US receiving antiretroviral agents, stratified by exposure to abacavir, in the IMS PharMetrics Plus claims database from October 1, 2009 through December 31, 2014

Table 2 Summary of exposure to various antiretroviral drugs among people living with HIV in the US in the IMS PharMetrics Plus claims database, stratified by regimens containing and not containing abacavir, from October 1, 2009 through December 31, 2014

Table 3 Incidence rate (IR) of cardiovascular disease events (CVDe) among persons living with HIV exposed to abacavir for various durations

Factors associated with abacavir use

At baseline, abacavir recipients had a higher prevalence of essential hypertension, diabetes mellitus, chronic kidney disease (CKD), dyslipidemia, lipodystrophy, heart disease, and use of cardiovascular medications (Table 1).
In the pooled logistic regression model, older age, a diagnosis of CKD, symptomatic HIV infection, and presence of lipodystrophy were associated with an increased probability of receiving abacavir (Additional file 1: Table S3). Predictors of outcome The sIPTW models showed the risk of CVDe (HR; 95% CI) was increased for current (1.43; 1.18, 1.73), recent (1.41; 1.16, 1.70) and cumulative (1.18; 1.06, 1.31) exposure (per year) to abacavir (Table 4). Separate models were run for each of current, recent, and cumulative exposure. The unadjusted and adjusted Cox models also showed increased risk for these exposures (Table 4). On further assessment of the risk from cumulative exposure, the HR varied with the duration of exposure in an inverted U-shaped pattern (Table 5 and Fig. 2 ); the relative hazard continued to increase up to 24 months of exposure, after which it decreased to non-significant levels but remained elevated compared to those never exposed to abacavir. Older age, male sex, tobacco use, other heart diseases, prior AMI, use of CVD-related medications, diabetes mellitus, and dyslipidemia were each associated with increased hazard of CVDe in the adjusted Cox model (Additional file 1: Table S4). We also assessed whether the risk was reversible after six months of stopping abacavir by comparing those with any abacavir exposure prior to but not in the last six months including the current month to those never exposed and found that the risk (HR; 95% CI) remained elevated (sIPTW model: 1.69; 0.89, 3.20; adjusted Cox model: 2.08; 1.17, 3.71). In tests of interactions, we observed that the risk of CVDe associated with abacavir use was more pronounced for age < 45 years (interaction p-value: 0.028) and for people without prior heart disease (interaction p-value: 0.016) (Additional file 1: Table S5). Table 4 Risk of cardiovascular disease events associated with current, recent, and cumulative exposure to abacavir among persons living with HIV, in the IMS PharMetric Plus claims database from October 1, 2009 through December 31, 2014 Table 5 Risk of cardiovascular disease among HIV-infected individuals exposed to abacavir for various durations Risk of cardiovascular disease events associated with increasing durations of exposure to abacavir as compared to those never exposed. See Table 3 and S4 table for covariate adjustment In a sensitivity analysis, we observed a 53% higher risk of CVDe (sIPTW model) for current exposure to abacavir among individuals without a prior AMI or heart disease at baseline (Additional file 1: Table S6). This relationship was also assessed by excluding other heart diseases (heart failure, cardiac arrhythmia, atherosclerosis, or receipt of cardiovascular medications) from the adjustment set of covariates for both the marginal and the adjusted Cox model, with similar results. The risk also remained elevated by 41% when the study population was restricted to individuals not using illicit substances or alcohol at baseline (Additional file 1: Table S6). We further tested for CVDe risk from abacavir use after adjusting for cumulative exposure to other antiretroviral agents (tenofovir, emtricitabine, zidovudine, lamivudine, lopinavir, atazanavir, darunavir, efavirenz, nevirapine, rilpivirine, and raltegravir) using sIPTW models and found elevated risk (HR; 95% CI) for current (1.38; 1.12, 1.68), recent (1.34; 1.09, 1.64), and cumulative exposure (1.16; 1.03, 1.31). 
We then replicated the D:A:D model [5] for cumulative exposure by including recent exposure in the same model as cumulative exposure and observed that although our hazard ratio estimate for risk from cumulative exposure (per year) remained elevated (HR:1.08; 95% CI: 0.89, 1.30), it decreased to a non-statistically significant level. When we modelled the risk by partitioning the cumulative exposure into various durations, we observed a similar increased risk [HR (95% CI)] pattern as observed in our primary analysis (Table 4): 1–6 months: 1.91 (0.95–3.83); 7–12 months: 2.58 (1.16–5.71); 13–18 months: 2.68 (1.17–6.11); 19–24 months: 2.90 (1.37–6.17); and ≥25 months: 2.13 (0.93–4.88). In a large database claims-based study, we found an increased risk of CVDe associated with exposure to current, recent, and cumulative exposure to abacavir using both adjusted Cox and marginal structural models estimated with inverse probability treatment weights. The overall incidence rate of AMI in this study was 4.78/1000 person-years, which compares to 3.3/1000 person-years in the 2008 D:A:D study. AMI incidences of 1.41/1000 people and 1.2/1000 people were seen in the general population in Olmstead county in Minnesota in 2006 and in men 35–65 years of age in the Framingham study population, respectively [29, 30]. This relatively higher incidence of AMI in the PLWH could be due to HIV infection [31,32,33], ART use [3], or both; PLWH have been shown to have more risk factors for CVD as compared to the general population [31,32,33]. The incidence rates of AMI associated with exposure to abacavir in this study (6.9/1000 person-years) and in the D:A:D study (6.1/1000 person-years) were ~4–5 fold higher than the general population estimates and approximately 2-fold higher than in the general population of PLWH [2, 31,32,33,34]. Some of the difference in results between this study and the D:A:D study including higher incidence rate of AMI in this study could be because participants in this study were all exposed to ART whereas the D:A:D study included individuals who had not yet started ART, as well as those who had discontinued ART totally. We calculated a population attributable risk of 8%. This means 8% (n = 57) of the total CVDe risk in the PLWH could be prevented if abacavir was not used, assuming a causal relationship between abacavir use and CVDe risk. In an attempt to characterize an underlying biological mechanism for the increase in CVDe risk associated with abacavir use, we assessed how the risk varied with duration of exposure. The relative hazard of AMI increased with increasing duration of exposure in an inverted U-shaped pattern, peaking between 13 and 24 months of exposure and leveling off thereafter, suggesting a dose response relationship between cumulative time exposed to abacavir and risk, up to 24 months. This result agrees with earlier finding by Young et al. in which they first showed that the risk of CVD increased with increasing duration of exposure, with greatest risk between 6 and 36 months and exposure beyond 36 months adding little to the risk, suggesting a dose-response pattern. 
We observed the dose-response relationship for various durations of cumulative exposure after controlling for recent exposure as well in addition to other variables in the model; the D:A:D study group [5] had reported that the observed risk for cumulative exposure disappeared after adjusting for recent exposure, meaning that the CVD risk existed only up to first 6 months of exposure, after which the risk reversed. In a separate model, we tested the risk reversibility and found a 69% increased risk of CVDe among those who had stopped abacavir prior to last six months, suggesting a risk not reversible within six months of stopping the drug. We did not formally test whether the inverted U-shaped curve described for cumulative exposure provides a better fit to the observed risk estimates than a simple linear association. Whereas this and Young et al.'s study results do not support an underlying mechanism related to immediate exposure to abacavir, the results are not consistent with an atherogenic mechanism, in which an ongoing or increasing risk would be expected with an increasing duration of exposure, without necessarily reaching a peak effect and leveling off after 24 months. The finding of an early peak in the increased risk of AMI in the course of abacavir treatment is helpful in understanding how risk may change with continuing versus changing therapy. The study results presented here suggest a reversible but more gradual underlying mechanism with a longer lasting impact that regresses more slowly after removal of the exposure. Prior work has suggested that abacavir-induced platelet hyper-reactivity and aggregation could potentially lead to thrombosis and myocardial infarction [35,36,37]. Specifically, abacavir may induce platelet hyper-reactivity by competitive inhibition of a nitric oxide-induced soluble guanylyl cyclase via its active metabolite, carbovir-triphosphate, leading to a decreased production of cyclic guanosine monophosphate, an inhibitor of platelet aggregation and secretion [24, 35, 36, 38]. It is possible that abacavir may trigger an acute platelet response leading to endothelial injury with a longer lasting impact. It is also unclear whether abacavir may exert its effect on CVD risk through an increase in inflammatory biomarkers. While the SMART/INSIGHT study investigators [15], Kristoffersen et al. [39], and Hileman et al. [40] showed evidence for a possible role of inflammatory biomarkers in causing CVD among abacavir users [e.g. increased levels of high sensitivity c-reactive protein (hsCRP) and interleukin-6], several other studies have shown that levels of inflammatory biomarkers such as hsCRP, interleukin-6, selectin P and E, D-dimer, vascular adhesion molecule-1, intercellular adhesion molecule-1, and tumor necrosis factor alpha are not elevated after exposure to abacavir [41,42,43,44,45,46,47,48,49,50,51,52,53,54]. Future interdisciplinary studies may explore these areas by bridging basic, translational and clinical science to provide additional insights into the mechanisms underlying abacavir-associated cardiovascular risk. We have not established a clear reason for observing a higher risk of CVDe associated with abacavir use among the younger age-group and individuals without a pre-existing cardiac condition in the test of interactions. 
While we acknowledge the exploratory nature of the analyses for interaction testing with the possibility that the results could be due to chance, the observation of a higher CVDe risk in individuals without prior heart disease may stand to support the finding of an increased risk in younger age people. The increased CVDe risk in younger age people could also reflect a higher prevalence of cocaine and injection drug use among them [19]. It would be important to test in other populations whether CVD risk associated with abacavir use differs by age. We used the sIPTW approach because individuals with certain risk factors for CVD such as CKD, hypertension, diabetes mellitus, and dyslipidemia, may be preferentially channeled into (or away from) receiving abacavir based on its known toxicity in the presence of these conditions. The sIPTW approach may also be necessary because post-baseline values of these variables may simultaneously serve as confounders and causal intermediates; adjusting for these through traditional methods can lead to biased results [55]. Under such settings, the use of inverse probability weights provides a valuable tool for balancing confounders across exposure groups without conditioning on variables affected by treatment [55, 56]. Some of our results for various durations of cumulative exposure appreciably differed between conventional Cox models and marginal structural models. For example, the hazard ratios (95% CI, p value) for 7–12 months, 13–18 months, and 19–24 months of cumulative exposure were 1.27 (0.87–1.86; p = 0.219), 1.71 (1.10, 2.65; p = 0.016), and 1.62 (0.98, 2.69; p = 0.060), respectively, in adjusted Cox models. The corresponding hazard ratios from marginal structural models were 1.41 (0.97, 2.06; p = 0.073), 1.78 (1.16, 2.72; p = 0.009), and 1.90 (1.16, 3.11; p = 0.011) (Table 4). A key strength of this study is the application of conventional and robust methods to address key study questions while using a very large U.S. health-plan dataset containing longitudinal information on usage of ART in >70,000 PLWH receiving care across the U.S. The recency of the data is an asset. Most studies that showed an association between abacavir use and CVD risk so far were hospital based [5, 9, 10, 14, 16,17,18,19] and hence may be subject to similar bias, such as channeling bias, that could arise from specific prescription behavior of physicians. Therefore, reproduction of the results in another representative population, such as that enrolled in the claims database, would be relevant and important. The similarity of these results to those from prior studies, the reproducibility of the results in the sensitivity analyses, and the finding of a background incidence rate of AMI comparable to that found in other studies are reassuring. A limitation of the study is that the ICD-9 and CPT diagnostic codes used may be prone to coding errors; however, such errors are likely to affect the exposure groups non-differentially and may not bias the study results. It is possible that information on covariates, such as body-weight, for which re-imbursement may not be sought could be under-reported in the database. Again, we expect this problem to exist non-differentially across exposure groups. This is an observational cohort study and is therefore subject to confounding from unmeasured factors and possible channeling bias; we have attempted to account for the latter by adopting an sIPTW-based analytic approach. 
Covariates that could be relevant but were not available in the claims database, and hence missing in our study, are race/ethnicity, CD4 cell count, and HIV viral load. Adjustment for CD4 cell count and HIV viral load made little difference to the relative rate of AMI in a prior study [5]. There is potential for bias in the study results from residual confounding that may arise from the binary categorization of certain variables in the study, rather than having a graded continuous response. We assumed uninformative censoring for the study because participants in both exposure groups, i.e., PLWH receiving an abacavir-based ART regimen and PLWH receiving a non-abacavir-based ART regimen, may be at similar risk of adverse HIV-related life events that would end their continued enrollment in the health plan and hence their representation in the database. We chose AMI and/or coronary artery interventions only to define CVDe so as to be as specific as possible with the study outcome's representation of ischemic CVD; however, a broader definition including other cardiac conditions or cerebrovascular events could have been used for the study outcome.

In summary, exposure to abacavir is associated with an increased risk of CVDe. We recommend a careful consideration of the risks and benefits of abacavir treatment while formulating antiretroviral treatment regimens with patients.

AMI: Acute myocardial infarction
ART: Antiretroviral therapy
ARV: Antiretroviral
CKD: Chronic kidney disease
CPT: Current Procedural Terminology
CVD: Cardiovascular disease
CVDe: Cardiovascular disease events
D:A:D: Data Collection on Adverse Events of Anti-HIV Drugs
HIV: Human immunodeficiency virus
hsCRP: High-sensitivity C-reactive protein
ICD-9-CM: International Classification of Disease, 9th Revision, Clinical Modification
PLWH: People living with HIV
sIPTW: Stabilized inverse probability of treatment weights

Antiretroviral Therapy Cohort Collaboration. Causes of death in HIV-1-infected patients treated with antiretroviral therapy, 1996-2006: collaborative analysis of 13 HIV cohort studies. Clin Infect Dis. 2010;50(10):1387–96. Triant VA, Lee H, Hadigan C, Grinspoon SK. Increased acute myocardial infarction rates and cardiovascular risk factors among patients with human immunodeficiency virus disease. J Clin Endocrinol Metab. 2007;92(7):2506–12. Bavinger C, Bendavid E, Niehaus K, Olshen RA, Olkin I, Sundaram V, Wein N, Holodniy M, Hou N, Owens DK, et al. Risk of cardiovascular disease from antiretroviral therapy for HIV: a systematic review. PLoS One. 2013;8(3). Cruciani M, Mengoli C, Malena M, Serpelloni G, Parisi SG, Moyle G, Bosco O. Virological efficacy of abacavir: systematic review and meta-analysis. J Antimicrob Chemother. 2014;69(12):3169–80. Sabin CA, Worm SW, Weber R, Reiss P, El-Sadr W, Dabis F, De Wit S, Law M, Monforte AD, Friis-Moller N, et al. Use of nucleoside reverse transcriptase inhibitors and risk of myocardial infarction in HIV-infected patients enrolled in the D:A:D study: a multi-cohort collaboration. Lancet. 2008;371(9622):1417–26. Antoniou T, Gillis J, Loutfy MR, Cooper C, Hogg RS, Klein MB, Machouf N, Montaner JSG, Rourke SB, Tsoukas C, et al. Impact of the data collection on adverse events of anti-HIV drugs cohort study on abacavir prescription among treatment-naive, HIV-infected patients in Canada. J Int Assoc Providers AIDS Care. 2014;13(2):153–9. Sabin CA, Reiss P, Ryom L, Phillips AN, Weber R, Law M, Fontas E, Mocroft A, de Wit S, Smith C, et al. Is there continued evidence for an association between abacavir usage and myocardial infarction risk in individuals with HIV? A cohort collaboration. BMC Med. 2016;14:61.
Brouwer ES, Napravnik S, Eron Jr JJ, Stalzer B, Floris-Moore M, Simpson Jr RJ, Stürmer T. Effects of combination antiretroviral therapies on the risk of myocardial infarction among HIV patients. Epidemiology. 2014;25(3):406–17. Choi AI, Vittinghoff E, Deeks SG, Weekley CC, Li YM, Shlipak MG. Cardiovascular risks associated with abacavir and tenofovir exposure in HIV-infected persons. AIDS. 2011;25(10):1289–98. Desai M, Joyce V, Bendavid E, Olshen RA, Hlatky M, Chow A, Holodniy M, Barnett P, Owens DK. Risk of cardiovascular events associated with current exposure to HIV antiretroviral therapies in a US veteran population. Clin Infect Dis. 2015;61(3):445–52. Durand M, Sheehy O, Lelorier J, Tremblay CL. Association between use of antiretroviral therapy and risk of acute myocardial infarction: a nested case control study using Quebec's public health insurance database (RAMQ). J Popul Ther Clin Pharmacol. 2011;18(2):e178–9. Martin A, Bloch M, Amin J, Baker D, Cooper DA, Emery S, Carr A, Grp SS. Simplification of antiretroviral therapy with Tenofovir-Emtricitabine or Abacavir-lamivudine: a randomized, 96-week trial. Clin Infect Dis. 2009;49(10):1591–601. Obel N, Farkas DK, Kronborg G, Larsen CS, Pedersen G, Riis A, Pedersen C, Gerstoft J, Sørensen HT. Abacavir and risk of myocardial infarction in HIV-infected patients on highly active antiretroviral therapy: a population-based nationwide cohort study. HIV Med. 2010;11(2):130–6. Palella FJ Jr, Althoff K, Moore R, Zhang J, Kitahata M, Gange S, Crane H, Drozd D, Brooks J, Elion R. Abacavir use and risk for myocardial infarction in the NA-ACCORD. In: CROI 2015: February 23–26, 2015 2015; Seattle; 2015. Strategies for Management of Anti-Retroviral Therapy I, Groups DADS. Use of nucleoside reverse transcriptase inhibitors and risk of myocardial infarction in HIV-infected patients. AIDS (London). England. 2008;22(14):F17–24. Young J, Xiao Y, Moodie EEM, Abrahamowicz M, Klein MB, Bernasconi E, Schmid P, Calmy A, Cavassini M, Cusini A, et al. Effect of cumulating exposure to Abacavir on the risk of cardiovascular disease events in patients from the Swiss HIV cohort study. J Acquir Immune Defic Syndr. 2015;69(4):413–21. Marcus JL, Neugebauer RS, Leyden WA, Chao CR, Xu L, Quesenberry CP Jr, Klein DB, Towner WJ, Horberg MA, Silverberg MJ. Use of Abacavir and risk of cardiovascular disease among HIV-infected individuals. J Acquir Immune Defic Syndr. 2016;71(4):413–9. Bedimo RJ, Westfall AO, Drechsler H, Vidiella G, Tebas P. Abacavir use and risk of acute myocardial infarction and cerebrovascular events in the highly active antiretroviral therapy era. Clin Infect Dis. 2011;53(1):84–91. Lang S, Mary-Krause M, Cotte L, Gilquin J, Partisani M, Simon A, Boccara F, Costagliola D. Impact of individual antiretroviral drugs on the risk of myocardial infarction in human immunodeficiency virus-infected patients: a case-control study nested within the French hospital database on HIV ANRS cohort CO4. Arch Intern Med. 2010;170(14):1228–38. Cruciani M, Zanichelli V, Serpelloni G, Bosco O, Malena M, Mazzi R, Mengoli C, Parisi SG, Moyle G. Abacavir use and cardiovascular disease events: a meta-analysis of published and unpublished data. AIDS. 2011;25(16):1993–2004. Ding X, Andraca-Carrera E, Cooper C, Miele P, Kornegay C, Soukup M, Marcus KA. No association of abacavir use with myocardial infarction: findings of an FDA meta-analysis. J Acquir Immune Defic Syndr. 2012;61(4):441–7. 
Ribaudo HJ, Benson CA, Zheng Y, Koletar SL, Collier AC, Lok JJ, Smurzynski M, Bosch RJ, Bastow B, Schouten JT. No risk of myocardial infarction associated with initial antiretroviral treatment containing abacavir: short and long-term results from ACTG A5001/ALLRT. Clin Infect Dis. 2011;52(7):929–40. Brothers CH, Hernandez JE, Cutrell AG, Curtis L, Ait-Khaled M, Bowlin SJ, Hughes SH, Yeo JM, Lapierre DH. Risk of myocardial infarction and Abacavir therapy: no increased risk across 52 GlaxoSmithKline-sponsored clinical trials in adult subjects. J Acquir Immune Defic Syndr. 2009;51(1):20–8. Gresele P, Falcinelli E, Momi S, Francisci D, Baldelli F. Highly active antiretroviral therapy-related mechanisms of endothelial and platelet function alterations. Rev Cardiovasc Med. 2014;15(Suppl 1):S9–20. IMS' PharMetrics Plus Data Dictionary [http://www.imshealth.com/en/thought-leadership/quintilesims-institute/research-support/research-support%E2%80%93data-and-information]. Kiyota Y, Schneeweiss S, Glynn RJ, Cannuscio CC, Avorn J, Solomon DH. Accuracy of Medicare claims-based diagnosis of acute myocardial infarction: estimating positive predictive value on the basis of review of hospital records. Am Heart J. 2004;148(1):99–104. Cole SR, Hernan MA. Constructing inverse probability weights for marginal structural models. Am J Epidemiol. 2008;168(6):656–64. Fewell Z, Hernan MA, Wolfe F, Tilling K, Choi H, Sterne JAC. Controlling for time-dependent confounding using marginal structural models. Stata J. 2004;4(4):402–20. Parikh NI, Gona P, Larson MG, Fox CS, Benjamin EJ, Murabito JM, O'Donnell CJ, Vasan RS, Levy D. Long-term trends in myocardial infarction incidence and case fatality in the National Heart, Lung, and Blood Institute's Framingham heart study. Circulation. 2009;119(9):1203–10. Roger VL, Weston SA, Gerber Y, Killian JM, Dunlay SM, Jaffe AS, Bell MR, Kors J, Yawn BP, Jacobsen SJ. Trends in incidence, severity, and outcome of hospitalized myocardial infarction. Circulation. 2010;121(7):863–9. Currier JS, Taylor A, Boyd F, Dezii CM, Kawabata H, Burtcel B, Maa JF, Hodder S. Coronary heart disease in HIV-infected individuals. J Acquir Immune Defic Syndr. 2003;33(4):506–12. Data Collection on Adverse Events of Anti HIVdSG, Smith C, Sabin CA, Lundgren JD, Thiebaut R, Weber R, Law M, Monforte A, Kirk O, Friis-Moller N, et al. Factors associated with specific causes of death amongst HIV-positive individuals in the D:a:D study. AIDS. 2010;24(10):1537–48. Worm SW, De Wit S, Weber R, Sabin CA, Reiss P, El-Sadr W, Monforte AD, Kirk O, Fontas E, Dabis F, et al. Diabetes mellitus, preexisting coronary heart disease, and the risk of subsequent coronary heart disease events in patients infected with human immunodeficiency virus: the data collection on adverse events of anti-HIV drugs (D:a:D study). Circulation. 2009;119(6):805–11. Freiberg MS, Chang CC, Kuller LH, Skanderson M, Lowy E, Kraemer KL, Butt AA, Bidwell Goetz M, Leaf D, Oursler KA, et al. HIV infection and the risk of acute myocardial infarction. JAMA Intern Med. 2013;173(8):614–22. Baum PD, Sullam PM, Stoddart CA, McCune JM. Abacavir increases platelet reactivity via competitive inhibition of soluble guanylyl cyclase. AIDS (London, England). 2011;25(18):2243–8. Falcinelli E, Francisci D, Belfiori B, Petito E, Guglielmini G, Malincarne L, Mezzasoma A, Sebastiano M, Conti V, Giannini S, et al. Vivo platelet activation and platelet hyperreactivity in abacavir-treated HIV-infected patients. Thromb Haemost. 2013;110(2):349–57. 
Satchell CS, O'Halloran JA, Cotter AG, Peace AJ, O'Connor EF, Tedesco AF, Feeney ER, Lambert JS, Sheehan GJ, Kenny D, et al. Increased platelet reactivity in HIV-1-infected patients receiving Abacavir-containing antiretroviral therapy. J Infect Dis. 2011;204(8):1202–10. Gresele P, Francisci D, Falcinelli E, Belfiori B, Petito E, Fierro T, Mezzasoma AM, Baldelli F. Possible role of platelet activation in the cardiovascular complications associated with HIV infection: differential effects of abacavir (ABC) vs. tenofovir (TDF). J Thromb Haemost. 2011;9:799. Kristoffersen US, Kofoed K, Kronborg G, Benfield T, Kjaer A, Lebech AM. Changes in biomarkers of cardiovascular risk after a switch to abacavir in HIV-1-infected individuals receiving combination antiretroviral therapy. HIV Med. 2009;10(10):627–33. Hileman CO, Wohl DA, Tisch DJ, Debanne SM, McComsey GA. Short communication initiation of an Abacavir-containing regimen in HIV-infected adults is associated with a smaller decrease in inflammation and endothelial activation markers compared to non-Abacavir-containing regimens. AIDS Res Hum Retrovir. 2012;28(12):1561–4. Young B, Squires KE, Ross LL, Santiago L, Sloan LM, Zhao HH, Wine BC, Pakes GE, Margolis DA, Shaefer MS, et al. Inflammatory biomarker changes and their correlation with Framingham cardiovascular risk and lipid changes in antiretroviral-naive HIV-infected patients treated for 144 weeks with Abacavir/lamivudine/Atazanavir with or without ritonavir in ARIES. AIDS Res Hum Retrovir. 2013;29(2):350–8. Martinez E, Larrousse M, Podzamczer D, Perez I, Gutierrez F, Lonca M, Barragan P, Deulofeu R, Casamitjana R, Mallolas J, et al. Abacavir-based therapy does not affect biological mechanisms associated with cardiovascular dysfunction. AIDS. 2010;24(3):F1–9. Palella FJ, Gange SJ, Benning L, Jacobson L, Kaplan RC, Landay AL, Tracy RP, Elion R. Inflammatory biomarkers and abacavir use in the Women's interagency HIV study and the multicenter AIDS cohort study. AIDS. 2010;24(11):1657–65. Padilla S, Masia M, Garcia N, Jarrin I, Tormo C, Gutierrez F. Early changes in inflammatory and pro-thrombotic biomarkers in patients initiating antiretroviral therapy with abacavir or tenofovir. BMC Infect Dis. 2011;11 De Luca A, Donati KD, Cozzi-Lepri A, Colafigli M, De Curtis A, Capobianchi MR, Antinori A, Giacometti A, Magnani G, Vullo V, et al. Exposure to Abacavir and biomarkers of cardiovascular disease in HIV-1-infected patients on suppressive antiretroviral therapy: a longitudinal study. Jaids-Journal of Acquired Immune Deficiency Syndromes. 2012;60(3):E98–E101. Kim C, Gupta SK, Green L, Taylor BM, Deuter-Reinhard M, Desta Z, Clauss M. Abacavir, didanosine and tenofovir do not induce inflammatory, apoptotic or oxidative stress genes in coronary endothelial cells. Antivir Ther. 2011;16(8):1335–9. Patel P, Bush T, Overton T, Baker J, Hammer J, Kojic E, Conley L, Henry K, Brooks JT. Effect of abacavir on acute changes in biomarkers associated with cardiovascular dysfunction. Antivir Ther. 2012;17(4):755–61. Rasmussen TA, Tolstrup M, Melchjorsen J, Frederiksen CA, Nielsen US, Langdahl BL, Ostergaard L, Laursen AL. Evaluation of cardiovascular biomarkers in HIV-infected patients switching to abacavir or tenofovir based therapy. BMC Infect Dis. 2011;11 Wohl DA, Arnoczy G, Fichtenbaum CJ, Campbell T, Taiwo B, Hicks C, McComsey GA, Koletar S, Sax P, Tebas P, et al. 
Comparison of cardiovascular disease risk markers in HIV-infected patients receiving abacavir and tenofovir: the nucleoside inflammation, coagulation and endothelial function (NICE) study. Antivir Ther. 2014;19(2):141–7. Hammond E, McKinnon E, Mallal S, Nolan D. Longitudinal evaluation of cardiovascular disease-associated biomarkers in relation to abacavir therapy. AIDS. 2008;22(18):2540–3. O'Halloran J, Dunne E, Tinago W, Denieffe S, Kenny D, Mallon P. Effect of switch from Abacavir to Tenofovir DF on platelet function markers: a SWIFT trial substudy. In: CROI 2014: March 3–6, 2014 2014; Boston. 2014:484. Martin A, Amin J, Cooper DA, Carr A, Kelleher AD, Bloch M, Baker D, Woolley I, Emery S. Group Ss: Abacavir does not affect circulating levels of inflammatory or coagulopathic biomarkers in suppressed HIV: a randomized clinical trial. AIDS. 2010;24(17):2657–63. De Luca A, de Gaetano Donati K, Cozzi-Lepri A, Colafigli M, De Curtis A, Capobianchi MR, Antinori A, Giacometti A, Magnani G, Vullo V, et al. Exposure to abacavir and biomarkers of cardiovascular disease in HIV-1-infected patients on suppressive antiretroviral therapy: a longitudinal study. J Acquir Immune Defic Syndr. 2012;60(3):e98–101. Patel P, Bush T, Overton T, Baker J, Hammer J, Kojic E, Conley L, Henry K, Brooks JT, Investigators SUNS. Effect of abacavir on acute changes in biomarkers associated with cardiovascular dysfunction. Antivir Ther. 2012;17(4):755–61. Robins JM, Hernan MA, Brumback B. Marginal structural models and causal inference in epidemiology. Epidemiology. 2000;11(5):550–60. Petersen ML, van der Laan MJ. Causal models and learning from data: integrating causal modeling and statistical estimation. Epidemiology. 2014;25(3):418–26. We would like to thankfully acknowledge the study participants and the clinic staff at the study centers. The datasets used and /or analyzed during the current study are available from the corresponding author on reasonable request and shall be made available at any time, upon request or suggestion by the editorial board, through an online public repository such as DYRAD https://datadryad.org/pages/organization, etc. Division of Epidemiology, School of Public Health, University of California Berkeley, Hall Berkeley, 101 Haviland, CA, 94720-7358, USA Kunchok Dorjee, Sanjiv M. Baxi, Arthur L. Reingold & Alan Hubbard Department of Medicine, University of California San Francisco, San Francisco, California, USA Sanjiv M. Baxi Division of Biostatistics, School of Public Health, University of California Berkeley, Berkeley, California, USA Alan Hubbard Division of Infectious Diseases, School of Medicine, Johns Hopkins University, Baltimore, MD, USA Kunchok Dorjee Arthur L. Reingold KD had full access to all of the data in the study and takes responsibility for integrity of the data and accuracy of the data analysis. All authors read and approved the final manuscript. Study concept and design: KD, AH. Acquisition, analysis, or interpretation of data: KD, SB, AR, AH. Statistical analysis: KD. Drafting of the manuscript: KD. Critical revision of the manuscript for important intellectual content: KD, SB, AR, AH. Correspondence to Kunchok Dorjee. K.D. is currently a post-doctoral fellow at the Johns Hopkins School of Medicine Infectious Diseases. K.D. is working with Richard Chaisson, MD, on global control of tuberculosis and HIV-AIDS, especially in the pediatric population. K.D. recently completed his Ph.D. 
degree in Epidemiology from UC-Berkeley School of Public Health under supervision of Arthur Reingold, MD, and Alan Hubbard, PhD, who have overseen his doctoral dissertation work including the work contained in this manuscript. Dr. Reingold, co-author on this paper, is the head of UC-Berkeley School of Public Health Division of Epidemiology and Dr. Hubbard is the head of UC-Berkeley School of Public Health Division of Biostatistics. Dr. Baxi has been an infectious disease fellow at the University of California, San Francisco and a Ph.D. student at UC-Berkeley at the time of this work. The study was approved by the Committee for Protection of Human Subjects at the University of California, Berkeley. Disclosures: K.D. was an intern with the Division of Epidemiology at Gilead Sciences (Foster City, CA, USA), which supported the acquisition of the data. Gilead Sciences was not involved in the design and conduct of the study; collection, management, analysis, and interpretation of the data; preparation, review, or approval of the manuscript; and decision to submit the manuscript for publication. This manuscript is a result of K.D.'s PhD dissertation work at UC-Berkeley. Support: S.M.B. is supported by the UCSF Traineeship in AIDS Prevention Studies (US National Institutes of Health (NIH) T32 MH-19105). The funders were not involved in the design and conduct of the study; collection, management, analysis, and interpretation of the data; preparation, review, or approval of the manuscript; and decision to submit the manuscript for publication. ICD-9-CM and CPT codes for defining various covariates and outcomes. Table S2. Age-specific incidence rate (IR) of acute myocardial infarction (AMI) among persons living with HIV receiving antiretroviral therapy. Table S3. Factors associated with initiation of abacavir among persons living with HIV, by pooled logistic regression. Table S4 The influence of various risk factors on the development of CVD among persons living with HIV receiving anti-retroviral therapy. Table S5 Risk of CVD from current exposure to abacavir in sub-groups of variables at baseline (test of interactions). Table S6 Risk of cardiovascular disease from exposure to abacavir among persons living with HIV free of heart diseasea or substance or alcohol abuse at baseline. Appendix 1. Detailed approach to developing marginal structural models. (DOCX 58 kb) Dorjee, K., Baxi, S.M., Reingold, A.L. et al. Risk of cardiovascular events from current, recent, and cumulative exposure to abacavir among persons living with HIV who were receiving antiretroviral therapy in the United States: a cohort study. BMC Infect Dis 17, 708 (2017). https://doi.org/10.1186/s12879-017-2808-8 Accepted: 23 October 2017 Anti-retroviral therapy Submission enquiries: [email protected]
Experimental Astronomy Design concepts for the Cherenkov Telescope Array CTA: an advanced facility for ground-based high-energy gamma-ray astronomy The CTA Consortium M. Actis G. Agnetta F. Aharonian A. Akhperjanian J. Aleksić E. Aliu D. Allan I. Allekotte F. Antico L. A. Antonelli P. Antoranz A. Aravantinos T. Arlen H. Arnaldi S. Artmann K. Asano H. Asorey J. Bähr A. Bais C. Baixeras S. Bajtlik D. Balis A. Bamba C. Barbier M. Barceló A. Barnacka J. Barnstedt U. Barres de Almeida J. A. Barrio S. Basso D. Bastieri C. Bauer J. Becerra Y. Becherini K. Bechtol J. Becker V. Beckmann W. Bednarek B. Behera M. Beilicke M. Belluso M. Benallou W. Benbow J. Berdugo K. Berger T. Bernardino K. Bernlöhr A. Biland S. Billotta T. Bird E. Birsin E. Bissaldi S. Blake O. Blanch A. A. Bobkov L. Bogacz M. Bogdan C. Boisson J. Boix J. Bolmont G. Bonanno A. Bonardi T. Bonev O. Botner A. Bottani M. Bourgeat C. Boutonnet A. Bouvier S. Brau-Nogué I. Braun T. Bretz M. S. Briggs P. Brun L. Brunetti J. H. Buckley V. Bugaev R. Bühler T. Bulik G. Busetto S. Buson K. Byrum M. Cailles R. Cameron R. Canestrari S. Cantu E. Carmona A. Carosi P. H. Carton M. Casiraghi H. Castarede O. Catalano S. Cavazzani S. Cazaux B. Cerruti M. Cerruti P. M. Chadwick J. Chiang M. Chikawa M. Cieślar M. Ciesielska A. Cillis C. Clerc P. Colin J. Colomé M. Compin P. Conconi V. Connaughton J. Conrad J. L. Contreras P. Coppi M. Corlier P. Corona O. Corpace D. Corti J. Cortina H. Costantini G. Cotter B. Courty S. Couturier S. Covino J. Croston G. Cusumano M. K. Daniel F. Dazzi A. de Angelis E. de Cea del Pozo E. M. de Gouveia Dal Pino O. de Jager I. de la Calle Pérez G. De La Vega B. De Lotto M. de Naurois E. de Oña Wilhelmi V. de Souza B. Decerprit C. Deil E. Delagnes G. Deleglise C. Delgado T. Dettlaff A. Di Paolo F. Di Pierro C. Díaz J. Dick H. Dickinson S. W. Digel D. Dimitrov G. Disset A. Djannati-Ataï M. Doert W. Domainko D. Dorner M. Doro J.-L. Dournaux D. Dravins L. Drury F. Dubois R. Dubois G. Dubus C. Dufour D. Durand J. Dyks M. Dyrda E. Edy K. Egberts C. Eleftheriadis S. Elles D. Emmanoulopoulos R. Enomoto J.-P. Ernenwein M. Errando A. Etchegoyen A. D. Falcone K. Farakos C. Farnier S. Federici F. Feinstein D. Ferenc E. Fillin-Martino D. Fink C. Finley J. P. Finley R. Firpo D. Florin C. Föhr E. Fokitis Ll. Font G. Fontaine A. Fontana A. Förster L. Fortson N. Fouque C. Fransson G. W. Fraser L. Fresnillo C. Fruck Y. Fujita Y. Fukazawa S. Funk W. Gäbele S. Gabici A. Gadola N. Galante Y. Gallant B. García R. J. García López D. Garrido L. Garrido D. Gascón C. Gasq M. Gaug J. Gaweda N. Geffroy C. Ghag A. Ghedina M. Ghigo E. Gianakaki S. Giarrusso G. Giavitto B. Giebels E. Giro P. Giubilato T. Glanzman J.-F. Glicenstein M. Gochna V. Golev M. Gómez Berisso A. González F. González F. Grañena R. Graciani J. Granot R. Gredig A. Green T. Greenshaw O. Grimm J. Grube M. Grudzińska J. Grygorczuk V. Guarino L. Guglielmi F. Guilloux S. Gunji G. Gyuk D. Hadasch D. Haefner R. Hagiwara J. Hahn A. Hallgren S. Hara M. J. Hardcastle T. Hassan T. Haubold M. Hauser M. Hayashida R. Heller G. Henri G. Hermann A. Herrero J. A. Hinton D. Hoffmann W. Hofmann P. Hofverberg D. Hrupec H. Huan B. Huber J.-M. Huet G. Hughes K. Hultquist T. B. Humensky J.-F. Huppert A. Ibarra J. M. Illa J. Ingjald Y. Inoue S. Inoue K. Ioka C. Jablonski A. Jacholkowska M. Janiak P. Jean H. Jensen T. Jogler I. Jung P. Kaaret S. Kabuki J. Kakuwa C. Kalkuhl R. Kankanyan M. Kapala A. Karastergiou M. Karczewski S. Karkar N. Karlsson J. Kasperek H. Katagiri K. Katarzyński N. Kawanaka B. 
Kȩdziora E. Kendziorra B. Khélifi D. Kieda T. Kifune T. Kihm S. Klepser W. Kluźniak J. Knapp A. R. Knappy T. Kneiske J. Knödlseder F. Köck K. Kodani K. Kohri K. Kokkotas N. Komin A. Konopelko K. Kosack R. Kossakowski P. Kostka J. Kotuła G. Kowal J. Kozioł T. Krähenbühl J. Krause H. Krawczynski F. Krennrich A. Kretzschmann H. Kubo V. A. Kudryavtsev J. Kushida N. La Barbera V. La Parola G. La Rosa A. López G. Lamanna P. Laporte C. Lavalley T. Le Flour A. Le Padellec J.-P. Lenain L. Lessio B. Lieunard E. Lindfors A. Liolios T. Lohse S. Lombardi A. Lopatin E. Lorenz P. Lubiński O. Luz E. Lyard M. C. Maccarone T. Maccarone G. Maier P. Majumdar S. Maltezos P. Małkiewicz C. Mañá A. Manalaysay G. Maneva A. Mangano P. Manigot J. Marín M. Mariotti S. Markoff G. Martínez M. Martínez A. Mastichiadis H. Matsumoto S. Mattiazzo D. Mazin T. J. L. McComb N. McCubbin I. McHardy C. Medina D. Melkumyan A. Mendes P. Mertsch M. Meucci J. Michałowski P. Micolon T. Mineo N. Mirabal F. Mirabel J. M. Miranda T. Mizuno B. Moal R. Moderski E. Molinari I. Monteiro A. Moralejo C. Morello K. Mori G. Motta F. Mottez E. Moulin R. Mukherjee P. Munar H. Muraishi K. Murase A. StJ. Murphy S. Nagataki T. Naito T. Nakamori K. Nakayama C. Naumann D. Naumann P. Nayman D. Nedbal A. Niedźwiecki J. Niemiec A. Nikolaidis K. Nishijima S. J. Nolan N. Nowak P. T. O'Brien I. Ochoa Y. Ohira M. Ohishi H. Ohka A. Okumura C. Olivetto R. A. Ong R. Orito M. Orr J. P. Osborne M. Ostrowski L. Otero A. N. Otte E. Ovcharov I. Oya A. Oziȩbło S. Paiano J. Pallota J. L. Panazol D. Paneque M. Panter R. Paoletti G. Papyan J. M. Paredes G. Pareschi R. D. Parsons M. Paz Arribas G. Pedaletti A. Pepato M. Persic P. O. Petrucci B. Peyaud W. Piechocki S. Pita G. Pivato Ł. Płatos R. Platzer L. Pogosyan M. Pohl G. Pojmański J. D. Ponz W. Potter E. Prandini R. Preece H. Prokoph G. Pühlhofer M. Punch E. Quel A. Quirrenbach P. Rajda R. Rando M. Rataj M. Raue C. Reimann O. Reimann A. Reimer O. Reimer M. Renaud S. Renner J.-M. Reymond W. Rhode M. Ribó M. Ribordy J. Rico F. Rieger P. Ringegni J. Ripken P. Ristori S. Rivoire L. Rob S. Rodriguez U. Roeser P. Romano G. E. Romero S. Rosier-Lees A. C. Rovero F. Roy S. Royer B. Rudak C. B. Rulten J. Ruppel F. Russo F. Ryde B. Sacco A. Saggion V. Sahakian K. Saito T. Saito N. Sakaki E. Salazar A. Salini F. Sánchez M. Á. Sánchez Conde A. Santangelo E. M. Santos A. Sanuy L. Sapozhnikov S. Sarkar V. Scalzotto V. Scapin M. Scarcioffolo T. Schanz S. Schlenstedt R. Schlickeiser T. Schmidt J. Schmoll M. Schroedter C. Schultz J. Schultze A. Schulz U. Schwanke S. Schwarzburg T. Schweizer J. Seiradakis S. Selmane K. Seweryn M. Shayduk R. C. Shellard T. Shibata M. Sikora J. Silk A. Sillanpää J. Sitarek C. Skole N. Smith D. Sobczyńska M. Sofo Haro H. Sol F. Spanier D. Spiga S. Spyrou V. Stamatescu A. Stamerra R. L. C. Starling Ł. Stawarz R. Steenkamp C. Stegmann S. Steiner N. Stergioulas R. Sternberger F. Stinzing M. Stodulski U. Straumann A. Suárez M. Suchenek R. Sugawara K. H. Sulanke S. Sun A. D. Supanitsky P. Sutcliffe M. Szanecki T. Szepieniec A. Szostek A. Szymkowiak G. Tagliaferri H. Tajima H. Takahashi K. Takahashi L. Takalo H. Takami R. G. Talbot P. H. Tam M. Tanaka T. Tanimori M. Tavani J.-P. Tavernet C. Tchernin L. A. Tejedor I. Telezhinsky P. Temnikov C. Tenzer Y. Terada R. Terrier M. Teshima V. Testa L. Tibaldo O. Tibolla C. J. Todero Peixoto F. Tokanai M. Tokarz K. Toma D. F. Torres G. Tosti T. Totani F. Toussenel P. Vallania G. Vallejo J. van der Walt C. van Eldik J. Vandenbroucke H. Vankov G. Vasileiadis V. V. 
Vassiliev I. Vegas L. Venter S. Vercellone C. Veyssiere J. P. Vialle M. Videla P. Vincent J. Vink N. Vlahakis L. Vlahos P. Vogler A. Vollhardt F. Volpe H. P. von Gunten S. Vorobiov R. M. Wagner B. Wagner S. P. Wakely P. Walter R. Walter R. Warwick P. Wawer R. Wawrzaszek N. Webb P. Wegner A. Weinstein Q. Weitzel R. Welsing H. Wetteskind R. White A. Wierzcholska M. I. Wilkinson D. A. Williams M. Winde Ł. Wiśniewski A. Wolczko M. Wood Q. Xiong T. Yamamoto K. Yamaoka R. Yamazaki S. Yanagita B. Yoffo M. Yonetani A. Yoshida T. Yoshida T. Yoshikoshi V. Zabalza A. Zagdański A. Zajczyk A. Zdziarski A. Zech K. Ziȩtara P. Ziółkowski V. Zitelli P. Zychowski Ground-based gamma-ray astronomy has had a major breakthrough with the impressive results obtained using systems of imaging atmospheric Cherenkov telescopes. Ground-based gamma-ray astronomy has a huge potential in astrophysics, particle physics and cosmology. CTA is an international initiative to build the next generation instrument, with a factor of 5–10 improvement in sensitivity in the 100 GeV–10 TeV range and the extension to energies well below 100 GeV and above 100 TeV. CTA will consist of two arrays (one in the north, one in the south) for full sky coverage and will be operated as open observatory. The design of CTA is based on currently available technology. This document reports on the status and presents the major design concepts of CTA. Ground based gamma ray astronomy Next generation Cherenkov telescopes Design concepts Contact: W. Hofmann ([email protected]), M. Martínez ([email protected]), CTA Project Office, Landessternwarte, Universität Heidelberg, Königstuhl, 69117 Heidelberg, Germany. The present generation of imaging atmospheric Cherenkov telescopes (H.E.S.S., MAGIC and VERITAS) has in recent years opened the realm of ground-based gamma ray astronomy for energies above a few tens of GeV. The Cherenkov Telescope Array (CTA) will explore in depth our Universe in very high energy gamma-rays and investigate cosmic processes leading to relativistic particles, in close cooperation with observatories of other wavelength ranges of the electromagnetic spectrum, and those using cosmic rays and neutrinos. Besides guaranteed high-energy astrophysics results, CTA will have a large discovery potential in key areas of astronomy, astrophysics and fundamental physics research. These include the study of the origin of cosmic rays and their impact on the constituents of the Universe through the investigation of galactic particle accelerators, the exploration of the nature and variety of black hole particle accelerators through the study of the production and propagation of extragalactic gamma rays, and the examination of the ultimate nature of matter and of physics beyond the Standard Model through searches for dark matter and the effects of quantum gravity. With the joining of the US groups of the Advanced Gamma-ray Imaging System (AGIS) project, and of the Brazilian and Indian groups in Spring 2010, and with the strong Japanese participation, CTA represents a genuinely world-wide effort, extending well beyond its European roots. 
CTA will consist of two arrays of Cherenkov telescopes, which aim to: (a) increase sensitivity by another order of magnitude for deep observations around 1 TeV, (b) boost significantly the detection area and hence detection rates, particularly important for transient phenomena and at the highest energies, (c) increase the angular resolution and hence the ability to resolve the morphology of extended sources, (d) provide uniform energy coverage for photons from some tens of GeV to beyond 100 TeV, and (e) enhance the sky survey capability, monitoring capability and flexibility of operation. CTA will be operated as a proposal-driven open observatory, with a Science Data Centre providing transparent access to data, analysis tools and user training. To view the whole sky, two CTA sites are foreseen. The main site will be in the southern hemisphere, given the wealth of sources in the central region of our Galaxy and the richness of their morphological features. A second complementary northern site will be primarily devoted to the study of Active Galactic Nuclei (AGN) and cosmological galaxy and star formation and evolution. The performance and scientific potential of arrays of Cherenkov telescopes have been studied in significant detail, showing that the performance goals can be reached. What remains to be decided is the exact layout of the telescope array. Ample experience exists in constructing and operating telescopes of the 12-m class (H.E.S.S., VERITAS). Telescopes of the 17-m class are operating (MAGIC) and one 28-m class telescope is under construction (H.E.S.S. II). These telescopes will serve as prototypes for CTA. The structural and optical properties of such telescopes are well understood, as many have been built for applications from radio astronomy to solar power installations. The fast electronics needed in gamma ray astronomy to capture the nanosecond-scale Cherenkov pulses have long been mastered, well before such electronics became commonplace with the Gigahertz transmission and processing used today in telephony, internet, television, and computing. The extensive experience of members of the consortium in the area of conventional photomultiplier tubes (PMTs) provides a solid foundation for the design of cameras with an optimal cost/performance ratio. Consequently, the base-line design relies on conventional PMTs. Advanced photon detectors with improved quantum efficiency are under development and test and may well be available when the array is constructed. In short, all the technical solutions needed to carry out this project exist today. The main challenge lies in the industrialisation of all aspects of the production and the exploitation of economies of scale. Given the large amounts of data recorded by the instrument and produced by computer simulations of the experiment, substantial efforts in e-science and grid computing are envisaged to enable efficient data processing. Some of the laboratories involved in CTA are Tier 1 and 2 centres on the LHC computing grid and the Cosmogrid. Simulation and analysis packages for CTA are developed for the grid. The consortium has set up a CTA-Virtual Organisation within the EGEE project (Enabling Grids for E-sciencE; funded by the European Union) for use of grid infrastructure and the sharing of computing resources, which will facilitate worldwide collaboration for simulations and the processing and analysis of scientific data. 
Unlike current ground-based gamma-ray instruments, CTA will be an open observatory, with a Science Data Centre (SDC) which provides pre-processed data to the user, as well as the tools necessary for the most common analyses. The software tools will provide an easy-to-use and well-defined access to data from this unique observatory. CTA data will be accessible through the Virtual Observatory, with varying interfaces matched to different levels of expertise. The required toolkit is being developed by partners with experience in SDC management from, for example, the INTEGRAL space mission. Experiments in astroparticle physics have proven to be an excellent training ground for young scientists, providing a highly interdisciplinary work environment with ample opportunities to acquire not only physics skills but also to learn data processing and data mining techniques, programming of complex control and monitoring systems and design of electronics. Further, the environment of the large multi-national CTA Collaboration, working across international borders, ensures that presentation skills, communication ability and management and leadership proficiency are enhanced. Young scientists frequently participate in outreach activities and, thus, hone also their skills in this increasingly important area. With its training and mobility opportunities for young scientists, CTA will have a major impact on society. Outreach activities will be an important part of the CTA operation. Lectures and demonstrations augmented by web-based non-expert tools for viewing CTA data will be offered to pupils and lay audiences. Particularly interesting objects will be featured on the CTA web pages, along the lines of the "Source of the Month" pages of the H.E.S.S. collaboration. CTA is expected to make highly visible contributions towards popularising science and generating enthusiasm for research at the cosmic frontier and to create interest in the technologies applied in this field. 2 CTA, a new science infrastructure In the field of very high energy gamma-ray astronomy (VHE, energies >100 GeV1), the instruments H.E.S.S. (http://www.mpi-hd.mpg.de/hfm/HESS), MAGIC (http://magic.mppmu.mpg.de) and VERITAS (http://veritas.sao.arizona.edu) have been driving the development in recent years. The spectacular astrophysics results from the current Cherenkov instruments have generated considerable interest in both the astrophysics and particle physics communities and have created the desire for a next-generation, more sensitive and more flexible facility, able to serve a larger community of users. The proposed CTA2 (http://www.cta-observatory.org) is a large array of Cherenkov telescopes of different sizes, based on proven technology and deployed on an unprecedented scale (Fig. 1). It will allow significant extension of our current knowledge in high-energy astrophysics. CTA is a new facility, with capabilities well beyond those of conceivable upgrades of existing instruments such as H.E.S.S., MAGIC or VERITAS. The CTA project unites the main research groups in this field in a common strategy, resulting in an unprecedented convergence of efforts, human resources, and know-how. Interest in and support for the project is coming from scientists in Europe, America, Asia and Africa, all of whom wish to use such a facility for their research and are willing to contribute to its design and construction. CTA will offer worldwide unique opportunities to users with varied scientific interests. 
The number of scientists, in particular young scientists, working in the still-evolving field of gamma-ray astronomy is growing at a steady rate, drawing from other fields such as nuclear and particle physics. In addition, there is increased interest from other parts of the astrophysical community, ranging from radio to X-ray and satellite-based gamma-ray astronomers. CTA will, for the first time in this field, provide open access via targeted observation proposals and generate large amounts of public data, accessible using Virtual Observatory tools. CTA aims to become a cornerstone in a networked multi-wavelength, multi-messenger exploration of the high-energy non-thermal universe.

Fig. 1 Conceptual layout of a possible Cherenkov Telescope Array (not to scale)

3 The science case for CTA

3.1 Science motivation in a nutshell

3.1.1 Why observe in gamma-rays?

Radiation at gamma-ray energies differs fundamentally from that detected at lower energies and hence longer wavelengths: GeV to TeV gamma-rays cannot conceivably be generated by thermal emission from hot celestial objects. The energy of thermal radiation reflects the temperature of the emitting body, and apart from the Big Bang there is and has been nothing hot enough to emit such gamma-rays in the known Universe. Instead, we find that high-energy gamma-rays probe a non-thermal Universe, where other mechanisms allow the concentration of large amounts of energy onto a single quantum of radiation. In a bottom-up fashion, gamma-rays can be generated when highly relativistic particles, accelerated for example in the gigantic shock waves of stellar explosions, collide with ambient gas or interact with photons and magnetic fields. The flux and energy spectrum of the gamma-rays reflect the flux and spectrum of the high-energy particles. They can therefore be used to trace these cosmic rays and electrons in distant regions of our own Galaxy or even in other galaxies. High-energy gamma-rays can also be produced in a top-down fashion by decays of heavy particles such as hypothetical dark matter particles or cosmic strings, both of which might be relics of the Big Bang. Gamma-rays therefore provide a window on the discovery of the nature and constituents of dark matter.

High-energy gamma-rays, as argued above, can be used to trace the populations of high-energy particles in distant regions of our own or in other galaxies. Meandering in interstellar magnetic fields, cosmic rays will usually not reach Earth and thus cannot be observed directly. Those which do arrive have lost all directional information and cannot be used to pinpoint their sources, except for cosmic rays of extreme energy >10¹⁸ eV. However, such high-energy particle populations are an important aspect of the dynamics of galaxies. Typically, the energy content in cosmic rays equals the energies in magnetic fields or in thermal radiation. The pressure generated by high-energy particles drives galactic outflows and helps balance the gravitational collapse of galactic disks. Astronomy with high-energy gamma-rays is so far the only way to directly probe and image the cosmic particle accelerators responsible for these particle populations, in conjunction with studies of the synchrotron radiation resulting from relativistic electrons moving in magnetic fields and giving rise to non-thermal radio and X-ray emission.

3.1.2 A first glimpse of the astrophysical sources of gamma-rays

The first images of the Milky Way in VHE gamma-rays have been obtained in the last few years.
These reveal a chain of gamma-ray emitters situated along the Galactic equator (see Fig. 2), demonstrating that sources of high-energy radiation are ubiquitous in our Galaxy. Sources of this radiation include supernova shock waves, where presumably atomic nuclei are accelerated and generate the observed gamma-rays. Another important class of objects is the "nebulae" surrounding pulsars, where giant rotating magnetic fields give rise to a steady flow of high-energy particles. Additionally, some of the objects discovered to emit at such energies are binary systems, where a black hole or a pulsar orbits a massive star. Along the elliptical orbit, the conditions for particle acceleration vary and hence the intensity of the radiation is modulated with the orbital period. These systems are particularly interesting in that they enable the study of how particle acceleration processes respond to varying ambient conditions. One of several surprises was the discovery of "dark sources", objects which emit VHE gamma-rays but have no obvious counterpart in other wavelength regimes. In other words, there are objects in the Galaxy which might in fact be detectable only in high-energy gamma-rays. Beyond our Galaxy, many extragalactic sources of high-energy radiation have been discovered, located in active galaxies, where a super-massive black hole at the centre of the galaxy is fed by a steady stream of gas and is releasing enormous amounts of energy. Gamma-rays are believed to be emitted from the vicinity of these black holes, allowing the study of the processes occurring in this violent and as yet poorly understood environment.

Fig. 2 The Milky Way viewed in VHE gamma-rays, in four bands of Galactic longitude [1]

3.1.3 Cherenkov telescopes

The recent breakthroughs in VHE gamma-ray astronomy were achieved with ground-based Cherenkov telescopes. When a VHE gamma-ray enters the atmosphere, it interacts with atmospheric nuclei and generates a shower of secondary electrons, positrons and photons. Moving through the atmosphere at speeds higher than the speed of light in air, these electrons and positrons emit a beam of bluish light, the Cherenkov light. For near-vertical showers this Cherenkov light illuminates a circle with a diameter of about 250 m on the ground. For large zenith angles the area can increase considerably. This light can be captured with optical elements and be used to image the shower, which vaguely resembles a shooting star. Reconstructing the shower axis in space and tracing it back onto the sky allows the celestial origin of the gamma-ray to be determined. Measuring many gamma-rays enables an image of the gamma-ray sky, such as that shown in Fig. 2, to be created. Large optical reflectors with areas in the 100 m² range and beyond are required to collect enough light, and the instruments can only be operated on dark nights at clear sites. With Cherenkov telescopes, the effective area of the detector is about the size of the Cherenkov light pool on the ground. As this is a circle of about 250 m diameter, the effective area is about 10⁵ times larger than that achievable with satellite-based detectors. Much lower fluxes at higher energies can therefore be investigated with Cherenkov telescopes, enabling the study of short-time-scale variability. The Imaging Atmospheric Cherenkov Technique was pioneered by the Whipple Collaboration in the United States. After more than 20 years of development, the Crab Nebula, the first source of VHE gamma-rays, was discovered in 1989.
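As a rough cross-check of the collection-area argument above, the following sketch compares the Cherenkov light-pool area with an assumed ~1 m² aperture representative of a satellite-based pair-conversion detector (the satellite value is an illustrative assumption, not a quoted specification):

```python
import math

# Cherenkov light pool on the ground for a near-vertical shower (quoted above)
pool_diameter_m = 250.0
pool_area_m2 = math.pi * (pool_diameter_m / 2.0) ** 2  # ~4.9e4 m^2

# Assumed effective area of a satellite-based detector, of order 1 m^2
satellite_area_m2 = 1.0

print(f"Cherenkov pool area: {pool_area_m2:.1e} m^2")
print(f"ratio to a ~1 m^2 satellite detector: {pool_area_m2 / satellite_area_m2:.0e}")
# -> ~5e4, i.e. of order 10^5, consistent with the factor quoted in the text
```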
The Crab Nebula is among the strongest sources of very high energy gamma-rays, and is often used as a "standard candle". Modern instruments, using multiple telescopes to track the cascades from different perspectives and employing fine-grained photon detectors for improved imaging, can detect sources down to 1% of the flux of the Crab Nebula. Finely-pixellated imaging was first employed in the French CAT telescope [2], and the use of "stereoscopic" telescope systems to provide images of the cascade from different viewing points was pioneered by the European HEGRA IACT system [3]. For summaries of the achievements in recent years and the science case for a next-generation very high energy gamma-ray observatory, see [4, 5, 6, 7, 8]. In March 2007, the High Energy Stereoscopic System (H.E.S.S.) project was awarded the Descartes Research Prize of the European Commission for offering "A new glimpse at the highest-energy Universe". Together with the instruments MAGIC and VERITAS (in the northern hemisphere) and CANGAROO (in the southern hemisphere), H.E.S.S. opened a new wavelength domain for astronomy, the domain of very high energy gamma-rays with energies between about 100 GeV and about 100 TeV, energies which are a million million times higher than the energy of visible light.

At lower energies, in the GeV domain, the launch of a new generation of gamma-ray telescopes (like AGILE, but in particular Fermi, which was launched in 2008) has opened a new era in gamma-ray discoveries [9]. The Large Area Telescope (LAT), the main instrument onboard Fermi, is sensitive to gamma-rays with energies in the range from 20 MeV to about 100 GeV. The energy range covered by CTA will smoothly connect to that of Fermi-LAT, overlap with that of the current generation of ground-based instruments, and extend to higher energies, while providing an improvement in both sensitivity and angular resolution.

3.2 The CTA science drivers

The aims of CTA can be roughly grouped into three main themes, serving as key science drivers:

– Understanding the origin of cosmic rays and their role in the Universe
– Understanding the nature and variety of particle acceleration around black holes
– Searching for the ultimate nature of matter and physics beyond the Standard Model

Theme 1 comprises the study of the physics of galactic particle accelerators, such as pulsars and pulsar wind nebulae, supernova remnants, and gamma-ray binaries. It deals with the impact of the accelerated particles on their environment (via the emission from particle interactions with the interstellar medium and radiation fields), and the cumulative effects seen at various scales, from massive star forming regions to starburst galaxies. Theme 2 concerns particle acceleration near super-massive and stellar-sized black holes. Objects of interest include microquasars at the Galactic scale, and blazars, radio galaxies and other classes of AGN that can potentially be studied in high-energy gamma-rays. The fact that CTA will be able to detect a large number of these objects enables population studies which will be a major step forward in this area. Extragalactic background light (EBL), galaxy cluster and gamma-ray burst (GRB) studies are also connected to this field. Finally, Theme 3 covers what can be called "new physics", with searches for dark matter through possible annihilation signatures, tests of Lorentz invariance, and any other observational signatures that may challenge our current understanding of fundamental physics.
CTA will be able to generate significant advances in all these areas.

3.3 Details of the CTA science case

We conclude this chapter with a few examples of physics issues that could be significantly advanced with an instrument like CTA. The list is certainly not exhaustive. The physics potential of CTA is being explored in detail by many scientists, and their findings indicate the huge potential for numerous interesting discoveries with CTA.

3.3.1 Cosmic ray origin and acceleration

A tenet of high-energy astrophysics is that cosmic rays (CRs) are accelerated in the shocks of supernova explosions. However, while particle acceleration up to energies well beyond 10¹⁴ eV has now clearly been demonstrated with the current generation of instruments, it is by no means proven that supernovae accelerate the bulk of cosmic rays. The large sample of supernovae which will be observable with CTA (in some scenarios several hundred objects), and in particular the increased energy coverage at lower and higher energies, will allow sensitive tests of acceleration models and determination of their parameters. Improved angular resolution (arcmin) will help to resolve fine structures in supernova remnants which are essential for the study of particle acceleration and particle interactions. Pulsar wind nebulae surrounding the pulsars (created in supernova explosions) are another abundant source of high-energy particles, including possibly high-energy nuclei. Energy conversion within pulsar winds and the interaction of the wind with the ambient medium and the surrounding supernova shell challenge current ideas in plasma physics.

The CR spectrum observed near the Earth can be described by a pure power law up to an energy of a few PeV, where it slightly steepens. The feature is called the "knee". The absence of other features in the spectrum suggests that, if supernova remnants (SNRs) are the sources of galactic CRs, they must be able to accelerate particles at least up to the knee. For this to happen, the acceleration in diffusive shocks has to be fast enough for particles to reach PeV energies before the SNR enters the Sedov phase, when the shock slows down and consequently becomes unable to confine the highest-energy CRs [10]. Since the initial free expansion velocity of SNRs does not vary much from object to object, only the amplification of magnetic fields can increase the acceleration rate to the required level. Amplification factors of 100–1,000 compared to the interstellar medium value and small diffusion coefficients are needed [11]. The non-linear theory of diffusive shock acceleration suggests that such an amplification of the magnetic field might be induced by the CRs themselves, and high-resolution X-ray observations of SNR shocks seem to support this scenario, though their interpretation is debated. Thus, an accurate determination of the intensity of the magnetic field at the shock is of crucial importance for disentangling the origin of the observed gamma-ray emission and understanding the way diffusive shock acceleration works. Even if a SNR can be detected by Cherenkov telescopes during a significant fraction of its lifetime (up to several times 10⁴ years), it can produce 10¹⁵ eV CRs only for a much shorter time (several hundred years), due to the rapid escape of PeV particles from the SNR. This implies that the number of SNRs which currently have a gamma-ray spectrum extending up to hundreds of TeV is very roughly of the order of ∼10.
The actual number of detectable objects will depend on the distance and on the density of the surrounding interstellar medium. The detection of such objects (even a few of them) would be extremely important, as it would be clear evidence for the acceleration of CRs up to PeV energies in SNRs. A sensitive scan of the galactic plane with CTA would be an ideal way of searching for these sources. In general, the spectra of the radiating particles (both electrons and protons), and therefore also the spectra of the gamma-ray radiation, should show characteristic curvature, reflecting acceleration at CR-modified shocks. However, to see such curvature, one needs coverage of a few decades in energy, far from the cutoff region. CTA will provide this coverage. If the general picture of SNR evolution described above is correct, the position of the cutoff in the gamma-ray spectrum depends on the age of the SNR and on the magnetic field at the shock. A study of the number of objects detected as a function of the cutoff energy will allow tests of this hypothesis and constraints to be placed on the physical parameters of SNRs, in particular the magnetic field strength.

CTA offers the possibility of real breakthroughs in the understanding of cosmic rays, as there is the potential to directly observe their diffusion (see, e.g., [12]). The presence of a massive molecular cloud located in the proximity of a SNR (or any kind of CR accelerator) provides a thick target for CR hadronic interactions and thus enhances the gamma-ray emission. Hence, studies of molecular clouds in gamma-rays can be used to identify the sites where CRs are accelerated. While travelling from the accelerator to the target, the spectrum of cosmic rays is a strong function of time, distance to the source, and the (energy-dependent) diffusion coefficient. Depending on the values of these parameters, varying proton, and therefore gamma-ray, spectra may be expected. CTA will allow the study of emission depending on these three quantities, which is impossible with current experiments. A determination, with high sensitivity, of spatially resolved gamma-ray sources related to the same accelerator would lead to the experimental determination of the local diffusion coefficient and/or the local injection spectrum of cosmic rays. Also, the observation of the penetration of cosmic rays into molecular clouds will be possible. If the diffusion coefficient inside a cloud is significantly smaller than the average in the neighbourhood, low-energy cosmic rays cannot penetrate deep into the cloud, and part of the gamma-ray emission from the cloud is suppressed, with the consequence that its gamma-ray spectrum appears harder than the cosmic-ray spectrum. Both of these effects are more pronounced in the denser central region of the cloud. Thus, with an angular resolution of the order of ≤1 arcmin, one could resolve the inner part of the clouds and measure the degree of penetration of cosmic rays [13]. More information on general aspects of cosmic rays and their relationship to VHE gamma-ray observations is available in the review talks and papers presented at the International Cosmic Ray Conference 2009 held in Łódź; the online proceedings are a good source of information [14].

3.3.2 Pulsar wind nebulae

Pulsar wind nebulae (PWNe) currently constitute the most populous class of identified Galactic VHE gamma-ray sources.
As is well known, the Crab Nebula is a very effective accelerator (shown by emission across more than 15 decades in energy) but not an effective inverse Compton gamma-ray emitter. Indeed, we see gamma-rays from the Crab because of its large spin-down power (∼10³⁸ erg s⁻¹), although the gamma-ray luminosity is much less than the spin-down power of its pulsar. This can be understood as resulting from a large (mG) magnetic field, which also depends on the spin-down power. A less powerful pulsar would imply a weaker magnetic field, which would allow a higher gamma-ray efficiency (i.e. a more efficient sharing between synchrotron and inverse Compton losses). For instance, HESS J1825-137 has a similar TeV luminosity to the Crab, but a spin-down power that is two orders of magnitude smaller, and its magnetic field has been constrained to be a few μG, instead of hundreds of μG. The differential gamma-ray spectrum of the whole emission region from the latter object has been measured over more than two orders of magnitude, from 270 GeV to 35 TeV, and shows indications of a deviation from a pure power law that CTA could confirm and investigate in detail. Spectra have also been determined for spatially separated regions of HESS J1825-137 [15]. Another example is HESS J1303-61 [16]. The photon spectra in the different regions show a softening with increasing distance from the pulsar and therefore an energy-dependent morphology. If the emission is due to the inverse Compton effect, the pulsar power is not sufficient to generate the gamma-ray luminosity, suggesting that the pulsar had a higher injection power in the past. Is this common for other PWNe, and what can that tell us about the evolution of pulsar winds?

In the case of Vela X [17], what appears to be a VHE inverse Compton peak in the spectral energy distribution (SED) was detected for the first time. Although a hadronic interpretation has also been put forward, it is as yet unclear how large the contribution of ions to the pulsar wind could be. CTA can be used to test leptonic vs. hadronic models of gamma-ray production in PWNe. The return current problem for pulsars has not been solved to date, but if we detect a clear hadronic signal, this will show that ions are extracted from the pulsar surface, which may lead to a solution of the most fundamental question in pulsar magnetospheric physics: how do we close the pulsar current? In systems where we see a clear leptonic signal, it is important to measure the magnetisation (or "sigma") parameter of the PWNe. Are the magnetic fields and particles in these systems in equipartition (as in the Crab Nebula), or do they have particle-dominated winds? This will contribute significantly to the understanding of the magnetohydrodynamic flow in PWNe. Understanding the time evolution of the multi-wavelength synchrotron and inverse Compton (or hadronic) intensities is also an aim of CTA. Such evolutionary tracks are determined by the nature of the progenitor stellar wind, the properties of the subsequent composite SNR explosion and the surrounding interstellar environment. Finally, the sensitivity and angular resolution achievable with CTA will allow detailed multi-wavelength studies of large/close PWNe, and the understanding of particle propagation, the magnetic field profile in the nebula, and interstellar medium (ISM) feedback. The evolution and structure of pulsar wind nebulae is discussed in a recent review [18].
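A minimal sketch of why a weaker nebular magnetic field favours inverse Compton (gamma-ray) emission, as in the Crab versus HESS J1825-137 comparison above. It assumes Thomson-regime scattering and uses only the CMB as the target photon field (far-infrared and starlight photons would raise the radiation energy density further); the specific field values are illustrative:

```python
import math

def u_magnetic(B_gauss):
    """Magnetic energy density B^2 / (8 pi) in erg/cm^3 (Gaussian units)."""
    return B_gauss ** 2 / (8.0 * math.pi)

U_CMB = 4.2e-13  # erg/cm^3, CMB energy density (a lower bound on U_rad)

for label, B in [("Crab-like, B ~ 1 mG", 1e-3), ("HESS J1825-137-like, B ~ 5 uG", 5e-6)]:
    ratio = u_magnetic(B) / U_CMB  # ~ L_synchrotron / L_inverse-Compton in the Thomson regime
    print(f"{label}: U_B / U_CMB ~ {ratio:.1e}")
# Crab-like field: synchrotron losses dominate by ~1e5, so the TeV efficiency is low.
# Few-uG field: the two loss channels become comparable, allowing a higher TeV efficiency.
```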
Many key implications for VHE gamma-ray measurements, and an assessment of the current observations, can be found in [19].

3.3.3 The galactic centre region

It is clear that the galactic centre region itself will be one of the prime science targets for the next generation of VHE instruments [20, 21]. The galactic centre hosts the nearest super-massive black hole, as well as a variety of other objects likely to generate high-energy radiation, including hypothetical dark-matter particles which may annihilate and produce gamma-rays. Indeed, the galactic centre has been detected as a source of high-energy gamma-rays, and indications for high-energy particles diffusing away from the central source and interacting with the dense gas clouds in the central region have been observed. In observations with improved sensitivity and resolution, the galactic centre can potentially yield a variety of interesting results on particle acceleration and gamma-ray production in the vicinity of black holes, on particle propagation in central molecular clouds, and, possibly, on the detection of dark matter annihilation or decay.

The VHE gamma-ray view of the galactic centre region is dominated by two point sources, one coincident with a PWN inside SNR G0.9+0.1, and one coincident with the super-massive black hole Sgr A* and another putative PWN (G359.95-0.04). After subtraction of these sources, diffuse emission along the galactic centre ridge is visible, which shows two important features: it appears correlated with molecular clouds (as traced by the CS (1–0) line), and it exceeds by a factor of 3 to 9 the gamma-ray emission that would be produced if the same target material was exposed to the cosmic-ray environment in our local neighbourhood. The striking correlation of diffuse gamma-ray emission with the density of molecular clouds within ∼150 pc of the galactic centre favours a scenario in which cosmic rays interact with the cloud material and produce gamma-rays via the decay of neutral pions. The differential gamma-ray flux is stronger and harder than expected from just "passive" exposure of the clouds to the average galactic cosmic-ray flux, suggesting that one or more nearby particle accelerators are present. In a first approach, the observed gamma-ray morphology can be explained by cosmic rays diffusing away from an accelerator near the galactic centre into the surroundings. Adopting a diffusion coefficient of D = O(10³⁰) cm²/s, the lack of VHE gamma-ray emission beyond 150 pc in this model points to an accelerator age of no more than 10⁴ years. Clearly, improved sensitivity and angular resolution would permit the study of the diffusion process in great detail, including any possible energy dependence. An alternative explanation (which CTA will address) is the putative existence of a number of electron sources (e.g. PWNe) along the galactic centre ridge, correlated with the density of molecular clouds. Given the complexity and density of the source population in the galactic centre region, CTA's improved sensitivity and angular resolution are needed to map the morphology of the diffuse emission, and to test its hadronic or leptonic origin. CTA will also measure VHE absorption in the interstellar radiation field (ISRF). This is impossible for other experiments, like Fermi-LAT, as their energy coverage is too limited, and very hard or perhaps impossible for current air Cherenkov experiments, as they lack the required sensitivity.
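Two order-of-magnitude estimates behind the numbers used in this subsection, sketched under simple assumptions (isotropic diffusion with a constant, energy-independent D; head-on pair-production thresholds; the CMB and ~100 μm ISRF photon energies are representative values):

```python
# 1) Accelerator age implied by the ~150 pc extent of the diffuse emission
#    for a diffusion coefficient D ~ 1e30 cm^2/s, using t ~ R^2 / (4 D).
PC = 3.086e18                      # cm per parsec
D = 1e30                           # cm^2/s (value adopted above)
R = 150 * PC                       # cm
t_yr = R ** 2 / (4 * D) / 3.15e7
print(f"diffusion age ~ {t_yr:.0e} yr")          # ~2e3 yr, i.e. below the ~1e4 yr bound

# 2) Pair-production threshold, E_gamma * eps >~ (m_e c^2)^2 for a head-on collision;
#    absorption is strongest somewhat above threshold.
ME2 = 0.511e6 ** 2                 # (m_e c^2)^2 in eV^2
eps_cmb = 6.3e-4                   # eV, typical CMB photon
eps_isrf = 1.24 / 100.0            # eV, ~100 micron ISRF photon (E[eV] ~ 1.24 / lambda[um])
print(f"threshold on the CMB : {ME2 / eps_cmb / 1e12:.0f} TeV")   # ~400 TeV
print(f"threshold on the ISRF: {ME2 / eps_isrf / 1e12:.0f} TeV")  # ~20 TeV
```

Since the absorption probability peaks at energies a few times above threshold, these numbers are consistent with negligible CMB attenuation below a few hundred TeV and an ISRF imprint around 50 TeV, as discussed next.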
At 8 kpc distance, VHE gamma-ray attenuation due to the CMB is negligible for energies <500 TeV. But the attenuation due to the ISRF (which has a comparable number density at wavelengths 20–300 μm) can produce absorption at about 50 TeV [22]. Observation of the cutoff energy for different sources will provide independent tests and constraints of ISRF models. CTA will observe sources at different distances and thereby independently measure the absorption model and the ISRF. Due to their smaller distances there is less uncertainty in identifying intrinsic and extrinsic features in the spectrum than is the case for EBL studies. 3.3.4 Microquasars, gamma-ray, and X-ray binaries Currently, a handful of VHE gamma-ray emitters are known to be binary systems, consisting of a compact object, a neutron star or a black hole, orbiting a massive star. Whilst many questions on the gamma-ray emission from such systems are still open (in some cases it is not even clear if the energy source is a pulsar-driven nebula around a neutron star or accretion onto a black hole) it is evident that they offer a unique chance to "experiment" with cosmic accelerators. Along the eccentric orbits of the compact objects, the environment (including the radiation field) changes, resulting in a periodic modulation of the gamma-ray emission, allowing the study of how particle acceleration is affected by environmental conditions. Interestingly, the physics of microquasars in our own Galaxy resembles the processes occurring around super-massive black holes in distant active galaxies, with the exception of the much reduced time scales, providing insights in the emission mechanisms at work. The following are key questions in this area which CTA will be able to address, because of the extension of the accessible energy domain, the improvement in sensitivity, and the superior angular resolution it provides: Studies of the formation of relativistic outflows from highly magnetised, rotating objects. If gamma-ray binaries are pulsars, is the gamma-ray emission coming mostly from processes within the pulsar wind zone or rather from particles accelerated in the wind collision shock? Is the answer to this question a function of energy? What role do the inner winds play, particularly with regard to particle injection? Gamma-ray astronomy can provide data that will help to answer these questions, but which will also throw light on the particle energy distribution within the pulsar wind zone itself. Recent Fermi-LAT results on gamma-ray binaries, such as LS I +61 303 and LS 5039 (which are found to be periodic at GeV and TeV energies, although anti-correlated [23]), show the existence of a cutoff in the SED at a few GeV (a feature that was not predicted by any models). Thus, the large energy coverage of CTA is an essential prerequisite in disentangling of the pulsed and continuous components of the radiation and the exploration of the processes leading to the observed GeV–TeV spectral differences. Studies of the link between accretion and ejection around compact objects and transient states associated with VHE emission. It is known that black holes display different spectral states in X-ray emission, with transitions between a low/hard state, where a compact radio jet is seen, to a high/soft state, where the radio emission is reduced by large factors or not detectable at all [24]. Are these spectral changes related to changes in the gamma-ray emission? 
Is there any gamma-ray emission during non-thermal radio flares (with increased flux by up to a factor of 1,000)? Indeed, gamma-ray emission via the inverse Compton effect is expected when flares occur in the radio to X-ray region, due to synchrotron radiation of relativistic electrons and radiative, adiabatic and energy-dependent escape losses in fast-expanding plasmoids (radio clouds). Can future gamma-ray observations put constraints on the magnetic fields in plasmoids? Continued observations of key objects (such as Cyg X-1) with the sensitivity of current instruments (using sub-arrays of CTA) can provide good coverage. Flares of less than 1 hour at a flux of 10% of the Crab could be detected at the distance of the Galactic Centre. Hence variable sources could be monitored and triggers provided for observations with all CTA telescopes or with other instruments. For short flares, energy coverage in the 10–100 GeV band is not possible with current instruments (AGILE and Fermi-LAT lack sensitivity). Continuous coverage at higher energies is also impossible, due to lack of sensitivity with the current generation of Imaging Atmospheric Cherenkov Telescopes (IACTs). CTA will provide improved access to both regions. Collision of the jet with the ISM, as a non-variable source of gamma-ray emission. Improved angular resolution at high energies will provide opportunities for the study of microquasars, particularly if their jets contain a sizeable fraction of relativistic hadrons. While inner engines will still remain unresolved with future Cherenkov telescope arrays, microquasar jets and their interaction with the ISM might become resolvable, leading to the distinction of emission from the central object (which may be variable) and from the jet-ISM interaction (which may be stable). Microquasars, gamma-ray, and X-ray binaries, and high-energy aspects of astrophysical jets and binaries are discussed in [25]. 3.3.5 Stellar clusters, star formation, and starburst galaxies While the classical paradigm has supernova explosions as the dominant source of cosmic rays, it has been speculated that cosmic rays are also accelerated in stellar winds around massive young stars before they explode as supernovae, or around star clusters [26]. Indeed, there is growing evidence from gamma-ray data for a population of sources related to young stellar clusters and environments with strong stellar winds. However, lack of sensitivity currently prevents the detailed study and clear identification of these sources of gamma radiation. CTA aims at a better understanding of the relationship between star formation processes and gamma-ray emission. CTA can experimentally establish whether there is a direct correlation between star formation rate and gamma-ray luminosity when convection and absorption processes at the different environments are taken into account. Both the VERITAS and H.E.S.S. arrays have done deep observations of the nearest starburst galaxies, and have found them to be emitting TeV gamma-rays at the limit of their sensitivity. Future observations, with improved sensitivity at higher and lower energies, will reveal details of this radiation which in turn will help with an understanding of the spectra, provide constraints on the physical emission scenarios and extend the study of the relationship between star formation processes and gamma-ray emission to extragalactic environments. A good compendium of the current status of this topic can be found in the proceedings of a recent conference [27]. 
3.3.6 Pulsar physics

Pulsar magnetospheres are known to act as efficient cosmic accelerators, yet there is no complete and accepted model for this acceleration mechanism, a process which involves electrodynamics with very high magnetic fields as well as the effects of general relativity. Pulsed gamma-ray emission allows the separation of processes occurring in the magnetosphere from the emission in the surrounding nebula. That pulsed emission at tens of GeV can be detected with Cherenkov telescopes was recently demonstrated by MAGIC with the Crab pulsar [28] (and the sensitivity for pulsars with known pulse frequency is nearly an order of magnitude higher than for standard sources). Current Fermi-LAT results provide some support for models in which gamma-ray emission occurs far out in the magnetosphere, with reduced magnetic field absorption (i.e. in outer gaps). In these models, exponential cut-offs in the spectral energy distribution are expected at a few GeV, which have already been found in several Fermi pulsars. To make further progress in understanding the emission mechanisms in pulsars, it is necessary to study their radiation at extreme energies. In particular, the characteristics of pulsar emission in the GeV domain (currently best examined by the Fermi-LAT) and at VHE will tell us more about the electrodynamics within their magnetospheres. Studies of interactions of magnetospheric particle winds with external ambient fields (magnetic, starlight, CMB) are equally vital. Between ∼10 GeV and ∼50 GeV (where the LAT performance is limited), CTA, with a special low-energy trigger for pulsed sources, will allow a closer look at unidentified Fermi sources and deeper analysis of Fermi pulsar candidates. Above 50 GeV, CTA will explore the most extreme energetic processes in millisecond pulsars. The VHE domain will be particularly important for the study of millisecond pulsars, very much as the HE domain (with Fermi) is for classical pulsars.

On the other hand, the high-energy emission mechanism of magnetars is essentially unknown. For magnetars, we do not expect polar cap emission. Due to the large magnetic field, all high-energy photons would be absorbed if emitted close to the neutron star, i.e., CTA would be testing outer-gap models, especially if large X-ray flares are accompanied by gamma-ray emission. CTA can study the GeV–TeV emission related to short-timescale pulsar phenomena, which is beyond the reach of currently operating instruments. CTA can observe possible high-energy phenomena related to timing noise (in which the pulse phase and/or frequency of radio pulses drift stochastically) or to sudden increases in the pulse frequency (glitches) produced by apparent changes in the moment of inertia of neutron stars. Periodicity measurements with satellite instruments, which require very long integration times, may be compromised by such glitches, while CTA, with its much larger detection area and correspondingly shorter measurement times, is not. A good compendium of the current status of this topic can be found in the proceedings and the talks presented at the "International Workshop on the High-Energy Emission from Pulsars and their Systems" [29].

3.3.7 Active galaxies, cosmic radiation fields and cosmology

Active Galactic Nuclei (AGN) are among the largest storehouses of energy known in our cosmos. At the intersection of powerful low-density plasma inflows and outflows, they offer excellent conditions for efficient particle acceleration in shocks and turbulence.
AGN represent one third of the known VHE gamma-ray sources, with most of the detected objects belonging to the BL Lac class. The fast variability of the gamma-ray flux (down to minute time scales) indicates that gamma-ray production must occur close to the black hole, assisted by highly relativistic motion resulting in time (Lorentz) contraction when viewed by an observer on Earth. Details of how these jets are launched or even the types of particles of which they consist are poorly known. Multi-wavelength observations with high temporal and spectral resolution can help to distinguish between different scenarios, but this is at the limit of the capabilities of current instruments. The sensitivity of CTA, combined with simultaneous observations in other wavelengths, will provide a crucial advance in understanding the mechanisms driving these sources. Available surveys of BL Lacs suffer several biases at all wavelengths, further complicated by Doppler boosting effects and high variability. The big increase of sensitivity of CTA will provide large numbers of VHE sources of different types and opens the way to statistical studies of the VHE blazar and AGN populations. This will enable the exploration of the relation between different types of blazars, and of the validity of unifying AGN schemes. The distribution in redshift of known and relatively nearby BL Lac objects peaks around z ∼0.3. The large majority of the population is found within z < 1, a range easily accessible with CTA. CTA will therefore be able to analyse in detail blazar populations (out to z ∼2) and the evolution of AGN with redshift and to start a genuine "blazar cosmology". Several scenarios have been proposed to explain the VHE emission of blazars.3 However, none of them is fully self-consistent, and the current data are not sufficient to firmly rule out or confirm a particular mechanism. In the absence of a convincing global picture, a first goal for CTA will be to constrain model-dependent parameters of blazars within a given scenario. This is achievable due to the wide energy range, high sensitivity and high spectral resolution of CTA combined with multi-wavelength campaigns. Thus, the physics of basic radiation models will be constrained by CTA, and some of the models will be ruled out. A second more difficult goal will be to distinguish between the different remaining options and to firmly identify the dominant radiation mechanisms. Detection of specific spectral features, breaks, cut-offs, absorption or additional components, would be greatly helpful for this. The role of CTA as a timing explorer will be decisive for constraining both the radiative phenomena associated with, and the global geometry and dynamics of, the AGN engine. Probing variability down to the shortest time scales will significantly constrain acceleration and cooling times, instability growth rates, and the time evolution of shocks and turbulences. For the brightest blazar flares, current instruments are able to detect variability on the scales of several minutes. With CTA, such flares should be detectable within seconds, rather than minutes. A study of the minimum variability times of AGN with CTA would allow the localisation of VHE emission regions (parsec distance scales in the jet, the base of the jet, or the central engine) and would provide stringent constraints on the emission mechanisms as well as the intrinsic time scale connected to the size of the central super-massive black hole. 
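A minimal causality estimate of what the variability time scales quoted above imply for the size of the emission region; the Doppler factor (δ ≈ 10) and redshift (z ≈ 0.3) are illustrative assumptions, not measured values:

```python
C = 3e10          # cm/s
G = 6.674e-8      # cm^3 g^-1 s^-2
MSUN = 1.989e33   # g

def size_limit_cm(dt_s, doppler=10.0, z=0.3):
    """Causality limit on the emission-region size, R <~ c * dt * delta / (1 + z)."""
    return C * dt_s * doppler / (1.0 + z)

def schwarzschild_radius_cm(mass_solar):
    """Schwarzschild radius 2 G M / c^2."""
    return 2.0 * G * mass_solar * MSUN / C ** 2

print(f"R for 60 s variability : {size_limit_cm(60.0):.1e} cm")           # ~1.4e13 cm
print(f"R_s of a 1e9 Msun hole : {schwarzschild_radius_cm(1e9):.1e} cm")  # ~3e14 cm
# Minute-scale variability already confines the emitter to well below the horizon
# scale of a 1e9 solar-mass black hole, for the assumed Doppler factor.
```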
Recently, radio galaxies have emerged as a new class of VHE-emitting AGN [37]. Given the proximity of the sources and the larger jet angle to the line of sight compared to BL Lac objects, the outer and inner kpc jet structures will be spatially resolved by CTA. This will allow precise location of the main emission site and searches for VHE radiation from large-scale jets and hot spots besides the central core and jets seen in very long baseline interferometry images.

The observation of VHE emission from distant objects and their surroundings will also offer the unique opportunity to study extragalactic magnetic fields at large distances. If the fields are large, an e⁺e⁻ pair halo forms around AGNs, which CTA, with its high sensitivity and extended field of view, should be capable of detecting. For smaller magnetic field values, the effect of e⁺e⁻ pair formation along the path to the Earth is seen through energy-dependent time delays of variable VHE emission, which CTA, with its excellent time resolution, will be ideally suited to measure. CTA will also have the potential to deliver for the first time significant results on extragalactic diffuse emission at VHE, and offers the possibility of probing the integrated emission from all sources at these energies. While well measured at GeV energies with the EGRET and Fermi-LAT instruments, the diffuse emission at VHE is extremely challenging to measure due to its faintness and the difficulty of adequately subtracting the background. Here, the improved sensitivity coupled with the large field of view puts detection within reach of CTA.

VHE gamma-rays travelling from remote sources interact with the EBL via e⁺e⁻ pair production and are absorbed. Studying such effects as a function of the energy and redshift will provide unique information on the EBL density, and thereby on the history of the formation of stars and galaxies in the Universe. This approach is complementary to direct EBL measurements, which are hampered by strong foreground emission from our planetary system (zodiacal light) and the Galaxy. We anticipate that MAGIC II and H.E.S.S. II will at least double the number of detected sources, but this is unlikely to resolve the ambiguity between intrinsic spectral features and effects due to the EBL. It would still be very difficult to extract spectral information beyond z > 0.5, if our current knowledge of the EBL is correct. Only CTA will be able to provide a sufficiently large sample of VHE gamma-ray sources, and high-quality spectra for individual objects. For many of the sources, the SED will be determined at GeV energies, which are much less affected by the absorption and, thus, more suitable for the study of the intrinsic properties of the objects. We therefore anticipate that with CTA it will be possible to make robust predictions about the intrinsic spectrum above 40–50 GeV, for individual sources and for particular source classes.

The end of the dark ages of the Universe, the epoch of reionisation, is a topic of great interest [38]. Since it is not (yet) fully accessible via direct observations, most of our knowledge comes from simulations and from integral observables like the cosmic microwave background. The first (Population III) and second generations of stars are natural candidates for being the source of reionisation. If the first stars are hot and massive, as predicted by simulations, their UV photons emitted at z > 5 would be redshifted to the near infrared and could leave a unique signature on the EBL spectrum.
If the EBL contribution from lower redshift galaxies is sufficiently well known (for example, as derived from source counts) upper limits on the EBL density can be used to probe the properties of early stars and galaxies. Combining detailed model calculations with redshift-dependent EBL density measurements could allow the probing of the reionisation/ionisation history of the Universe. A completely new wavelength region of the EBL will be opened up by observations of sources at very high redshifts (z > 5), which will most likely be gamma-ray bursts. According to high-redshift UV background models, consistent with our current knowledge of cosmic reionisation, spectral cut-offs are expected in the few GeV to few tens of GeV range at z > 5. Thus, CTA could have the unique potential to probe cosmic reionisation models through gamma-ray absorption in high-z GRBs. We analyse the GRB prospects in more detail in the following. A good compendium of the current state of this topic can be found in the talks and the proceedings of the meeting, High-energy phenomena in relativistic outflows II [39]. 3.3.8 Gamma-ray bursts Gamma-Ray Bursts are the most powerful explosions in the Universe, and are by far the most electromagnetically luminous sources known to us. The peak luminosity of GRBs, equivalent to the light from millions of galaxies, means they can be detected up to high redshifts, hence act as probes of the star formation history and reionisation of the Universe. The highest measured GRB redshift is z = 8.2 but GRBs have been observed down to z = 0.0085 (the mean redshift is z∼2.2). GRBs occur in random directions on the sky, briefly outshining the rest of the hard X-ray and soft gamma-ray sky, and then fade from view. The rapid variability seen in gamma- and X-rays indicates a small source size, which together with their huge luminosities and clearly non-thermal spectrum (with a significant high-energy tail) require the emitting region to move toward us with a very large bulk Lorentz factor of typically >100, sometimes as high as >1,000 [40, 41, 42]. Thus, GRBs are thought to be powered by ultra-relativistic jets produced by rapid accretion onto a newly formed stellar-mass black hole or a rapidly rotating highly-magnetised neutron star (i.e. a millisecond magnetar). The prompt gamma-ray emission is thought to originate from dissipation within the original outflow by internal shocks or magnetic reconnection events. Some long duration GRBs are clearly associated with core-collapse supernovae of type Ic (from very massive Wolf–Rayet stars stripped of their H and He envelope by strong stellar winds), while the progenitors of short GRBs are much less certain: the leading model involves the merger of two neutron stars or a neutron star and a black hole [43, 44]. Many of the details of GRB explosions remain unclear. Studying them requires a combination of rapid observations to observe the prompt emission before it fades, and a wide energy range to properly capture the spectral energy distribution. Most recently, GRBs have been observed by the Swift and Fermi missions, which have revealed an even more complex behaviour than previously thought, featuring significant spectral and temporal evolution. As yet, no GRB has been detected at energies >100 GeV due to the limited sensitivity of current instruments and the large typical redshifts of these events. In just over a year of operation, the Fermi-LAT has detected emission above 10 GeV (30 GeV) from 4 (2) GRBs. 
In many cases, the LAT detects emission >0.1 GeV for several hundred seconds in the GRB rest-frame. In GRB090902B a photon of energy ∼33.4 GeV was detected, which translates to an energy of ∼94 GeV at its redshift of z = 1.822. Moreover, the observed spectrum is fairly hard up to the highest observed energies. Extrapolating the Fermi spectra to CTA energies suggests that a good fraction of the bright LAT GRBs could be detected by CTA even in ∼minute observing times, if it could be turned to look at the prompt emission fast enough. The faster CTA could get on target, the better the scientific return. Increasing the observation duty cycle by observing for a larger fraction of the lunar cycle and at larger zenith angles could also increase the return. Detecting GRBs in the CTA energy range would greatly enhance our knowledge of the intrinsic spectrum and the particle acceleration mechanism of GRBs, particularly when combined with data from Fermi and other observatories. As yet it is unclear what the relative importance is of the various proposed emission processes, which divide mainly into leptonic (synchrotron and inverse-Compton, and in particular synchrotron-self-Compton) and hadronic processes (induced by protons or nuclei at very high energies which either radiate synchrotron emission or produce pions with subsequent electromagnetic cascades). CTA may help to determine the identity of the distinct high-energy component that was observed so far in three out of the four brightest LAT GRBs. The origin of the high-energy component may in turn shed light on the more familiar lower-energy components that dominate at soft gamma-ray energies. The bulk Lorentz factor and the composition (protons, e + e − pairs, magnetic fields) of the outflows are also highly uncertain and may be probed by CTA. The afterglow emission which follows the prompt emission is significantly fainter, but should also be detectable in some cases. Such detections would be expected from bright GRBs at moderate redshift, not only from the afterglow synchrotron-self-Compton component, but perhaps also from inverse-Compton emission triggered by bright, late (hundreds to thousands of seconds) flares that are observed in about half of all Swift GRBs. The discovery space at high energies is large and readily accessible to CTA. The combination of GRBs being extreme astrophysical sources and cosmological probes make them prime targets for all high-energy experiments. With its large collecting area, energy range and rapid response, CTA is by far the most powerful and suitable VHE facility for GRB research and will open up a new energy range for their study. 3.3.9 Galaxy clusters Galaxy clusters are storehouses of cosmic rays, since all cosmic rays produced in the galaxies of the cluster since the beginning of the Universe will be confined there. Probing the density of cosmic rays in clusters via their gamma-ray emission thus provides a calorimetric measure of the total integrated non-thermal energy output of galaxies. Accretion/merger shocks outside cluster galaxies provide an additional source of high-energy particles. Emission from galaxy clusters is predicted at levels just below the sensitivity of current instruments [45]. Clusters of galaxies are the largest, gravitationally-bound objects in the Universe. The observation of mainly radio (and in some cases X-ray) emission proves the existence of non-thermal phenomena therein, but gamma-rays have not yet been detected. 
A possible additional source of non-thermal radiation from clusters is the annihilation of dark matter (DM). The increased sensitivity of CTA will help to establish the DM signal, and CTA could possibly be the first instrument to map DM at the scale of galaxy clusters.

3.3.10 Dark matter and fundamental physics

The dominant form of matter in the Universe is the as yet unknown dark matter, which is most likely to exist in the form of a new class of particles such as those predicted in supersymmetric or extra-dimensional extensions to the standard model of particle physics. Depending on the model, these DM particles can annihilate or decay to produce detectable Standard Model particles, in particular gamma-rays. Large dark matter densities due to the accumulation in gravitational potential wells lead to detectable fluxes, especially for annihilation, where the rate is proportional to the square of the density. CTA is a discovery instrument with unprecedented sensitivity for this radiation and also an ideal tool to study the properties of the dark matter particles. If particles beyond the standard model are discovered (at the Large Hadron Collider or in underground experiments), CTA will be able to verify whether they actually form the dark matter in the Universe. Slow-moving dark matter particles could give rise to a striking, almost mono-energetic photon emission. The discovery of such line emission would be conclusive evidence for dark matter. CTA might have the capability to detect gamma-ray lines even if the cross-section is loop-suppressed, which is the case for the most popular candidates of dark matter, i.e. those inspired by the minimal supersymmetric extension to the standard model (MSSM) and models with extra dimensions, such as Kaluza-Klein theories. Line radiation from these candidates is not detectable by Fermi, H.E.S.S. II or MAGIC II, unless optimistic assumptions on the dark matter density distribution are made. Recent updates of calculations regarding the gamma-ray spectrum from the annihilation of MSSM dark matter indicate the possibility of final-state contributions giving rise to distinctive spectral features (see the reviews in [46]). The more generic continuum contribution (arising from pion production) is more ambiguous but, with its curved shape, potentially distinguishable from the usual power-law spectra exhibited by known astrophysical sources. Our galactic centre is one of the most promising regions to look for dark matter annihilation radiation due to its predicted very high dark matter density. It has been observed by many experiments so far (e.g. H.E.S.S., MAGIC and VERITAS) and high-energy gamma-ray emission has been found. However, the identification of dark matter in the galactic centre is complicated by the presence of many conventional source candidates and the difficulties of modelling the diffuse gamma-ray background adequately. The angular and energy resolution of CTA, as well as its enhanced sensitivity, will be crucial to disentangling the different contributions to the radiation from the galactic centre. Other individual targets for dark matter searches are dwarf spheroidals and dwarf galaxies. They exhibit large mass-to-light ratios, and allow dark matter searches with low astrophysical backgrounds. With H.E.S.S., MAGIC and Fermi-LAT, some of these objects have been observed and upper limits on dark matter annihilation calculated, which are currently about an order of magnitude above the predictions of the most relevant cosmological models.
CTA will have good sensitivity for Weakly Interacting Massive Particle (WIMP) annihilation searches in the low and medium energy domains. An improvement in flux sensitivity of 1–2 orders of magnitude over current instruments is expected. Thus CTA will allow tests in significant regions of the MSSM parameter space. Dark matter would also cause spectral and spatial signatures in extragalactic and galactic diffuse emission. While the emissivity of conventional astrophysical sources scales with the local matter density, the emissivity of annihilating dark matter scales with the density squared, causing differences in the small-scale anisotropy power spectrum of the diffuse emission. Recent measurements of the positron fraction presented by the PAMELA Collaboration [47] point towards a relatively local source of positrons and electrons, especially if combined with the measurement of the e⁺e⁻ spectrum by Fermi-LAT [48]. The main candidates being put forward are either pulsar(s) or dark matter annihilation. One way to distinguish between these two hypotheses is the spectral shape. The dark matter spectrum exhibits a sudden drop at an energy which corresponds to the dark matter particle mass, while the pulsar spectrum falls off more smoothly. Another hint is a small anisotropy, either in the direction of the galactic centre (for dark matter) or in the direction of the nearest mature pulsars. The large effective area of CTA, about six orders of magnitude larger than for balloon- and satellite-borne experiments, and the greatly improved performance compared to existing Cherenkov observatories, might allow the measurement of the spectral shape and even the tiny dipole anisotropy. If the PAMELA result originated from dark matter, the DM particle's mass would be >1 TeV/c², i.e. large in comparison to most dark matter candidates in MSSM and Kaluza-Klein theories. With its best sensitivity at 1 TeV, CTA would be well suited to detect dark matter particles of TeV/c² masses. The best sensitivity of Fermi-LAT for dark matter is at masses of the order of 10–100 GeV/c². Electrons and positrons originating from dark matter annihilation or decay also produce synchrotron radiation in the magnetic fields present in the dense regions where the annihilation might take place. This opens up the possibility of multi-wavelength observations. Regardless of the wavelength domain in which dark matter will be detectable using present or future experiments, it is evident that CTA will provide coverage for the highest-energy part of the multi-wavelength spectrum necessary to pinpoint, discriminate and study dark matter indirectly. Due to their extremely short wavelength and long propagation distances, very high-energy gamma-rays are sensitive to the microscopic structure of space-time. Small-scale perturbations of the smooth space-time continuum should manifest themselves in an (extremely small) energy dependence of the speed of light. Such a violation of Lorentz invariance, on which the theory of special relativity is based, is present in some quantum gravity (QG) models. Burst-like events in which gamma-rays are produced, e.g. in active galaxies, allow this energy-dependent dispersion of gamma-rays to be probed and can be used to place limits on certain classes of quantum gravity scenarios, and may possibly lead to the discovery of effects associated with Planck-scale physics.
CTA has the sensitivity to detect characteristic time-scales and QG effects in AGN light curves (if indeed any exist) on a routine basis, without exceptional source flux states and in small observing windows. CTA can resolve time scales as small as a few seconds in AGN light curves and QG effects down to 10 s. Very good sensitivity at energies >1 TeV is especially important to probe the properties of QG effects at higher orders. Fermi recently presented results based on observations of a GRB which basically rule out linear-in-energy variations of the speed of light up to 1.2× the Planck scale [49]. To test quadratic or higher-order dependencies, the sensitivity provided by CTA will be needed. This topic is thoroughly discussed in the book "Particle dark matter" edited by G. Bertone [46], and aspects of the fundamental physics implications of VHE gamma-ray observations are covered in a recent review [50].

3.3.11 Imaging stars and stellar surfaces

The quest for better angular resolution in astronomy is driving much of the instrumentation development throughout the world, from gamma-rays through low-frequency radio waves. The optical region is optimal for studying objects with stellar temperatures, and the current frontier in angular resolution is represented by optical interferometers such as ESO's VLTI in Chile or the CHARA array in California. Recently, these have produced images of giant stars surrounded by ejected gas shells and revealed the oblate shapes of stars deformed by rapid rotation. However, such phase interferometers are limited by atmospheric turbulence to baselines of no more than some 100 m, and to wavelengths longer than the near infrared. Only very few stars are large enough to be imaged by current facilities. To see smaller details (e.g. magnetically active regions, planet-forming disks obscuring parts of the stellar disk) requires interferometric baselines of the order of 1 km. It has been proposed to incorporate such instruments on ambitious future space missions (the Luciola Hypertelescope for the ESA Cosmic Vision; the Stellar Imager as a NASA vision mission), or to locate them on the Earth in regions with the best possible seeing, e.g. in Antarctica (the KEOPS array). However, the complexity and cost of these concepts seem to put their realisation beyond the immediate planning horizon. An alternative that can be realised much sooner is offered by CTA, which could become the first kilometre-scale optical imager. With many telescopes distributed over a square kilometre or more, its unprecedented optical collecting area forms an excellent facility for ultrahigh angular resolution (sub-milliarcsecond) optical imaging through long-baseline intensity interferometry. This method was originally developed by Hanbury Brown and Twiss in the 1950s [51] for measuring the sizes of stars. It has since been extensively used in particle physics ("HBT interferometry") but it has had no recent application in astronomy, because it requires large telescopes spread out over large distances, which were not available until the recent development of atmospheric Cherenkov telescopes. The great observational advantages of intensity interferometry are its lack of sensitivity to atmospheric disturbances and to imperfections in the optical quality of the telescopes. This is because of the electronic (rather than optical) connection of telescopes.
The noise relates to electronic timescales of nanoseconds (and light-travel distances of centimetres or metres) rather than to those of the light wave itself (femtoseconds and nanometres). The requirements are remarkably similar to those for studying Cherenkov light: large light-collecting telescopes, high-speed optical detectors with sensitivity extending into the blue, and real-time handling of the signals on nanosecond levels. The main difference to ordinary Cherenkov telescope operation lies in the subsequent signal analysis, which digitally synthesises an optical telescope. From the viewpoint of observatory operations, it is worth noting that bright stars can be measured for interferometry during bright-sky periods of full Moon, which would hamper Cherenkov studies. Science targets include studying the disks and surfaces of hot and bright stars [52, 53]. Rapidly rotating stars naturally take on an oblate shape, with an equatorial bulge that, for stars rotating close to their break-up speed, may extend into a circumstellar disk, while the regions with higher effective gravity near the stellar poles become overheated, driving a stellar wind. If the star is observed from near its equatorial plane, an oblate image results. If the star is instead observed from near its poles, a radial temperature gradient should be seen. Possibly, stars with rapid and strong differential rotation could take on shapes midway between that of a doughnut and a sphere. The method permits studies in both broad-band optical light and in individual emission lines, and enables the mapping of gas flows between the components in close binary stars.

3.3.12 Measurements of charged cosmic rays

Cherenkov telescopes can contribute to cosmic-ray physics by detecting these particles directly [54]. CTA can provide measurements of the spectra of cosmic-ray electrons and nuclei in the energy regime where balloon- and space-borne instruments run out of data. The composition of cosmic rays has been measured by balloon- and space-borne instruments (e.g. TRACER) up to ≈100 TeV. Starting at about 1 PeV, instruments can detect air showers at ground level (e.g. KASCADE). Such air shower experiments have, however, difficulties in identifying individual nuclei, and consequently their composition results are of lower resolution than direct measurements. Cherenkov telescopes are the most promising candidates to close the experimental gap between the TeV and PeV domains, and will probably achieve better mass resolution than ground-based particle arrays. Additionally, CTA can perform crucial measurements of the spectrum of cosmic-ray electrons. TeV electrons have very short lifetimes, and thus short propagation distances, due to their rapid energy loss. The upper end of the electron spectrum (which is not accessible to current balloon and satellite experiments) is therefore expected to be dominated by local electron accelerators, and the cosmic-ray electron spectrum can provide valuable information about the characteristics of the contributing sources and of electron propagation. While such measurements involve analyses that differ from conventional gamma-ray studies, a proof of principle has already been performed with the H.E.S.S. telescopes: spectra of electrons and iron nuclei have been published [55]. The increase in sensitivity expected from CTA will provide significant improvements in such measurements.
3.4 The CTA legacy

The CTA legacy will most probably not be limited to individual observations addressing the issues mentioned above, but will also comprise a survey of the inner Galactic plane and/or, depending on the final array capabilities, a deep survey of all or part of the extragalactic sky. Surveys provide coverage of large parts of the sky, maximise serendipitous detections, allow for optimal use of telescope time, and thereby ensure the legacy of the project for the future scientific community. Surveys of different extents and depths are among the scientific goals of all major facilities planned or in operation at all wavelengths. In view of both the H.E.S.S. (see Fig. 2) and Fermi-LAT survey results, the usefulness of surveys is unquestioned, and many of the scientific cases discussed above can be encompassed within such an observational strategy. Two possible CTA survey schemes have been studied to date:

All-sky survey: With an effective field of view of 5°, 500 pointings of 0.5 h would cover a survey area of a quarter of the sky at the target sensitivity of 0.01 Crab. Hence, using about a quarter of the observing time in a year, a quarter of the sky can be surveyed down to a level of <0.01 Crab, which is equivalent to the flux level of the faintest AGN currently detected at VHE energies.

Galactic plane survey: The H.E.S.S. Galactic plane survey covered 1.5% of the sky, at a sensitivity of 0.02 Crab above 200 GeV, using about 250 h of observing time. The increase in CTA sensitivity means that a similar investment in time can be expected to result in a sensitivity of 2–3 mCrab over the accessible region of the Galactic plane.

The high-energy phenomena which can be studied with CTA span a wide field of galactic and extragalactic astrophysics, of plasma physics, particle physics, dark matter studies, and investigations of the fundamental physics of space-time. They carry information on the birth and death of stars, on the matter circulation in the Galaxy, and on the history of the Universe. Optimisation of the layout of CTA with regard to these different science goals is a difficult task, and detailed studies of the response of different array configurations to these scientific problems are being conducted during the Design Study and the Preparatory Phase.

4 Advancing VHE gamma-ray astronomy with CTA

The latest generation of ground-based gamma-ray instruments (H.E.S.S., MAGIC, VERITAS, Cangaroo III (http://icrhp9.icrr.u-tokyo.ac.jp) and MILAGRO (http://www.lanl.gov/milagro)) allow the imaging, photometry and spectroscopy of sources of high-energy radiation and have ensured that VHE gamma-ray studies have grown to become a genuine branch of astronomy. The number of known sources of VHE gamma rays now exceeds 100, and source types include supernova remnants, pulsar wind nebulae, binary systems, stellar winds, various types of active galaxies and unidentified sources without obvious counterparts. H.E.S.S. has conducted a highly successful survey of the Milky Way covering about 600 square degrees, which resulted in the detection of tens of new sources. However, a survey of the full visible sky would require at least a decade of observations, which is not feasible. Due to the small fluxes, instruments for the detection of high-energy gamma rays (above some 10 GeV) require a large effective detection area, eliminating space-based instruments which directly detect the incident gamma rays. Ground-based instruments allow much larger detection areas.
They measure the particle cascade induced when a gamma ray is absorbed in the atmosphere, either by using arrays of particle detectors to record the cascade particles which reach the ground (or mountain altitudes), or by using Cherenkov telescopes to image the Cherenkov light emitted by secondary electrons and positrons in the cascade. Compared to Cherenkov telescopes, air shower arrays (such as MILAGRO, AS-gamma or ARGO) have the advantage of a large duty cycle (they can observe during the daytime) and of large solid-angle coverage. However, their current sensitivity is such that they can only detect sources with a flux around the level of the flux from the Crab Nebula, the strongest known steady source of VHE gamma rays. Results from air shower arrays demonstrate that there are relatively few sources emitting at this level. The recent rapid evolution of VHE gamma-ray astronomy was therefore primarily driven by Cherenkov instruments, which reach sensitivities of 1% of the Crab flux for typical observing times of 25 h, and which provide significantly better angular resolution. While there are proposals for better air shower arrays with improved sensitivity (e.g. the HAWC project), which will certainly offer valuable complementary information, such approaches will not be able to compete in sensitivity with next-generation Cherenkov telescopes.

The properties of the major current and historic Cherenkov instruments are listed in Table 1. The instruments consist of up to four Cherenkov telescopes (or five for the H.E.S.S. II upgrade). They reach sensitivities of about 1% of the flux of the Crab Nebula at energies in the 100 GeV–1 TeV range. Sensitivity degrades towards lower energies, due to threshold effects, and towards higher energies, due to the limited detection area. A typical angular resolution is 0.1° or slightly better for single gamma rays. Sufficiently intense sources can be located with a precision of 10–20′′.

Table 1 Properties of selected air-Cherenkov instruments, including two of historical interest (HEGRA and CAT); adapted from [56]. Columns: latitude (°), longitude (°), altitude (m), field of view (°), energy threshold (TeV), sensitivity (% Crab) and total mirror area (m²). Instruments marked with a footnote (e.g. MAGIC I, Whipple) have pixels of two different sizes.

All these instruments are operated by the groups who built them, with very limited access for external observers and no provision for open data access. Such a mode is appropriate for current instruments, which detect a relatively limited number of sources, and where the analysis and interpretation can be handled by the manpower and experience accumulated in these consortia. However, a different approach is called for in next-generation instruments, with their expected ten-fold increase in the number of detectable objects. CTA will advance the state of the art in astronomy at the highest energies of the electromagnetic spectrum in a number of decisive areas, all of which are unprecedented in this field:

European and international integration: CTA will for the first time bring together and combine the experience of virtually all groups worldwide working with atmospheric Cherenkov telescopes.

Performance of the instrument: CTA aims to provide a full-sky view, from a southern and a northern site, with unprecedented sensitivity, spectral coverage, angular and timing resolution, combined with a high degree of flexibility of operation. Details are addressed below.
Operation as an open observatory: The characteristics listed above imply that CTA will, for the first time in this field, be operated as a true observatory, open to the entire astrophysics (and particle physics) community, and providing support for easy access and analysis of data. Data will be made publicly available and will be accessible through Virtual Observatory tools. Service to professional astronomers will be supplemented by outreach activities and interfaces to the data for laypersons.

Technical implementation, operation, and data access: While based on existing and proven techniques, the goals of CTA imply significant advances in terms of efficiency of construction and installation, in terms of the reliability of the telescopes, and in terms of data preparation and dissemination. With these characteristics, the CTA observatory is qualitatively different from experiments such as H.E.S.S., MAGIC or VERITAS, and the increase in capability goes well beyond anything that could ever be achieved through an expansion or upgrade of existing instruments.

Science performance goals for CTA include in particular:

Sensitivity: CTA will be about a factor of 10 more sensitive than any existing instrument. It will therefore for the first time allow detection and in-depth study of large samples of known source types, will explore a wide range of classes of suspected gamma-ray emitters beyond the sensitivity of current instruments, and will be sensitive to new phenomena. In its core energy range, from about 100 GeV to several TeV, CTA will have milli-Crab sensitivity, a factor of 1,000 below the strength of the strongest steady sources of VHE gamma rays, and a factor of 10,000 below the highest fluxes measured in bursts. This dynamic range will not only allow the study of weaker sources and of new source types, it will also reduce the selection bias in the taxonomy of known types of sources.

Energy range: Wide-band coverage of the electromagnetic spectrum is crucial for understanding the physical processes in sources of high-energy radiation. CTA aims to cover, with a single facility, three to four orders of magnitude in energy. Together with the much improved precision and lower statistical errors, this will enable astrophysicists to distinguish between key hypotheses such as the leptonic or hadronic origin of gamma rays from supernova remnants. Combined with the Fermi gamma-ray observatory in orbit, an unprecedented seamless coverage of more than seven orders of magnitude in energy can be achieved.

Angular resolution: Current instruments are able to resolve extended sources, but they cannot probe the fine structures visible in other wavebands. In supernova remnants, for example, the exact width of the gamma-ray emitting shell would provide a sensitive probe of the acceleration mechanism. By selecting a subset of gamma-ray induced cascades detected simultaneously by many of its telescopes, CTA can reach angular resolutions in the arc-minute range, a factor of 5 better than the typical values for current instruments.

Temporal resolution: With its large detection area, CTA will resolve flaring and time-variable emission on sub-minute time scales, which are currently not accessible. In gamma-ray emission from active galaxies, variability time scales probe the size of the emitting region.
Current instruments have already detected flares varying on time scales of a few minutes, requiring a paradigm shift concerning the phenomena in the vicinity of the super-massive black holes at the cores of active galaxies, and concerning the jets emerging from them. CTA will also enable access to episodic and periodic phenomena such as emission from inner stable orbits around black holes or from pulsars and other objects where frequent variations and glitches in period smear the periodicity when averaging over longer periods.

Flexibility: Consisting of a large number of individual telescopes, CTA can be operated in a wide range of configurations, allowing on the one hand the in-depth study of individual objects with unprecedented sensitivity, and on the other hand the simultaneous monitoring of tens of potentially flaring objects, and any combination in between (see Fig. 3).

Survey capability: A consequence of this flexibility is the dramatically enhanced survey capability of CTA. Groups of telescopes can point at adjacent fields in the sky, with their fields of view overlapping, providing an increase of the sky area surveyed per unit time by an order of magnitude, and for the first time enabling a full-sky survey at high sensitivity.

Number of sources: Extrapolating from the intensity distribution of known sources, CTA is expected to enlarge the catalogue of objects detected from currently several tens of objects to about 1,000 objects.

Global coverage and integration: Ultimately, CTA aims to provide full sky coverage from multiple observatory sites, using transparent access and identical tools to extract and analyse data.

Fig. 3 Some of the possible operating modes of CTA: (a) very deep observations, (b) combining monitoring of flaring sources with deep observations, (c) a survey mode allowing full-sky surveys.

The feasibility of the performance goals listed above is borne out by detailed simulations of arrays of telescopes, using currently available technology (details are given below). The implementation of CTA does, however, require significant advances in the engineering, construction and operation of the array, and in data access. These issues are addressed in the design study and the preparatory phase of CTA. Issues include:

Construction, installation and commissioning of the telescopes: To reach the performance targets, tens of telescopes of 2–3 different types will be required, and the design of the telescopes must be optimised in terms of their construction cost, making best use of the economics of large-scale production. In current instruments, consisting at most of a handful of identical telescopes, design costs were a substantial fraction of total costs, enforcing a different balance between design and production costs. The design of the telescopes will have to concentrate on modularity and ease of installation and commissioning.

Reliability: The reliability of current instruments is far from perfect, and down-times of individual telescopes due to hardware or software problems are non-negligible. For CTA, telescope design and software must provide significantly improved reliability. Frequent down-times of individual telescopes in the array or of pixels within a telescope not only require substantial technical on-site support and cause higher operating costs, but in particular they make the data analysis much more complicated, requiring extensive simulations for each configuration of active telescopes, and inevitably result in systematic errors which are likely to limit the achievable sensitivity.
Operation scheduling and monitoring: The large flexibility provided by the CTA array also raises new challenges concerning the scheduling of observations, taking into account the state of the array and the state of the atmosphere. For example, sky conditions may allow "discovery observations" in certain parts of the sky, but may prevent precise, deep observations of a source. Availability of a given telescope may be critical for certain types of observations, but may not matter at all in modes where the array is split up into many sub-arrays tracking different sources at somewhat reduced sensitivity. To make optimum use of the facility, novel scheduling algorithms will need to be developed, and the monitoring of the atmosphere over the full sky needs to be brought to a new level of precision.

Data access: So far, none of the current Cherenkov telescopes has made data publicly available, or has tools for efficient non-expert data access. Cherenkov telescopes are inherently more complicated than, say, X-ray satellite instruments in that they do not directly take images of the sky, but rather require extensive processing to go from the Cherenkov images to the parameters of the primary gamma ray. Depending on the emphasis in the data analysis (maximum detection rate, lowest energy threshold, best sensitivity, or highest angular resolution), there is a wide range of selection parameters, all resulting in different effective detection areas and instrument characteristics. Effective detection areas also depend on zenith angle, orientation relative to the Earth's magnetic field, etc. Background subtraction is critical, in particular for extended sources which may cover a significant fraction of the field of view. Providing efficient data access and analysis tools represents a major challenge and requires significant lead times and extensive software prototyping and tests.

5 Performance of Cherenkov Telescope Arrays

In order to achieve improvements of a factor of 10 in several areas, it is essential to understand and review the factors limiting the performance, and to establish to what extent limitations are of a technical nature, which can be overcome with sufficient effort (e.g. due to a given size of the camera pixels or the point spread function (PSF) of the reflector), and to what extent they represent fundamental limitations of the technique (e.g. due to unavoidable fluctuations in the development of air showers). To detect a cosmic gamma-ray source in a given energy band, three conditions have to be fulfilled:

The number of detected gamma rays \(N_\gamma\) has to exceed a minimum value, usually taken to be between 5 and 10 gamma rays. The number of gamma rays is the product of flux \(\phi_\gamma\), effective detection area A, observing time T (usually for sensitivity evaluation taken as between 25 and 50 h) and a detection efficiency \(\varepsilon_\gamma\) which is typically not too far below unity. The number of detected gamma rays, and hence the effective area A, are virtually always the limiting factor at the high-energy end of the useful energy range. For example, to detect a 1% Crab source above 100 TeV, which is equivalent to a flux of 2 × 10⁻¹⁶ cm⁻² s⁻¹, in 50 h, an area A of ≥30 km² is required.

The statistical significance of the gamma-ray excess has to exceed a certain number of standard deviations, usually taken to be 5.
For background-dominated observations of faint sources, the significance can be approximated as \(N_\gamma/\sqrt{N_{bg}}\), where the background events \(N_{bg}\) arise from cosmic-ray nuclei, cosmic-ray electrons, local muons, or random images caused by night-sky background (NSB). Background events are usually distributed more or less uniformly across the useful field of view of the instrument. Their number is given by the flux per unit solid angle, \(\phi_{bg}\), the solid angle \(\Omega_{src}\) over which gamma rays from a candidate source (and hence background) are accumulated, the effective detection area \(A_{bg}\), the observation time and a background rejection factor \(\epsilon_{bg}\). The sensitivity limit \(\phi_\gamma\) is hence proportional to \(\sqrt{\epsilon_{bg} A_{bg} T\Omega_{src}}/(\epsilon_\gamma A_\gamma T) \sim \sqrt{\Omega_{src}}/\sqrt{\epsilon_{bg} A T}\) (assuming \(A_{bg} \sim A_\gamma\)). In current instruments, electron and cosmic-nucleon backgrounds limit the sensitivity in the medium to lower part of their energy range.

The systematic error on the number of excess gamma rays due to uncertainties in background estimates and background subtraction has to be sufficiently small, and has to be accounted for in the calculation of the significance. Fluctuations in the background rates due to changes in voltages, pulse shapes, calibration, in particular when non-uniform over the field of view, or in the cut efficiencies, e.g. due to non-uniform NSB noise, will result in such background systematics. Effectively, this means that a minimal signal-to-background ratio is required to safely detect a source. The systematic limitation becomes important in the limit of small statistical errors, when event numbers are very large due to large detection areas, observation times, or low energy thresholds resulting in high count rates. Since both signal and background scale with A and T, the systematic sensitivity limit is proportional to the relative background rate, \(\phi_\gamma \sim (\epsilon_{bg}\,\Omega_{src})/\epsilon_\gamma\). For current instruments, background uncertainties at a level of a few % have been reported [57]. High reliability and availability of telescopes and pixels, as well as improved schemes for calibration and monitoring, will be crucial in controlling systematic errors and exploiting the full sensitivity of the instrument. An accuracy of the background modelling and subtraction of 1% seems reasonable and is assumed in the following. Systematic errors may still limit sensitivity in the sub-100 GeV range.

Figure 4 illustrates the various sensitivity limitations in the context of a simple toy model; a short numerical sketch combining the three limits is given below. Obviously, sensitivity is boosted by a large effective area A, efficient rejection of background, i.e. small \(\epsilon_{bg}\), and, in the case of point-like structures, by good angular resolution δ with \(\Omega_{src} \propto \delta^2\). Sensitivity gains can furthermore be achieved with a large field of view of the instrument, observing multiple sources at a time and effectively multiplying the attainable observation time T.

Fig. 4 Toy model of a telescope array to illustrate the limiting sensitivity, quoted as the minimal detectable fraction of the Crab flux per energy band Δlog₁₀(E) = 0.2 (assuming a simple power law for the Crab flux and ignoring the change in spectral index at low energy).
The model assumes an energy-independent effective detection area of 1 km², a gamma-ray efficiency \(\epsilon_\gamma\) of 0.5, the same efficiency for the detection of cosmic-ray electrons, a cosmic-ray efficiency after cuts of \(\epsilon_{bg}\) = 0.01, an angular resolution δ of 0.1° defining the integration region \(\Omega_{src}\), and a systematic background uncertainty of 1%. The model takes into account that cosmic-ray showers generate less Cherenkov light than gamma-ray showers, and are hence reconstructed at lower equivalent gamma-ray energy. At high energy, the sensitivity is limited by the gamma-ray count rate (black line), at intermediate energies by electron (red) and cosmic-ray backgrounds (green), and at low energies, in the area of high statistics, by the systematic background uncertainty (purple). The plot also includes the effect of the PSF improving like 1/\(\sqrt{\rm E}\) (with PSF = 0.1° for 80% containment at 200 GeV).

The annual exposure amounts to about 1,000 h of useful moonless observation time, varying by maybe 20% between good and excellent sites. Observations with partial moon may increase this by a factor of 1.5, at the expense of reduced performance, depending on the amount of stray light. Some instruments, such as MAGIC, routinely operate under moonlight [58]. While in principle more than 500 h per year can be dedicated to a given source (depending on its RA and the maximum zenith angle under which observations are carried out), in practice rarely more than 50–100 h are dedicated to a given source per year. With the increased number of sources detectable by CTA, there will be pressure to reduce the time per source compared to current observations.

In real systems, the effective area A, background rejection \(\epsilon_{bg}\) and angular resolution δ depend on gamma-ray energy, since a minimal number of detected Cherenkov photons (around 50–100) is required to detect and analyse an image, and since the quality of shower reconstruction depends on the statistics of both detected photons and shower particles. The performance of the instrument depends on whether gamma-ray energies are in the sub-threshold regime, near the nominal energy threshold, or well above threshold. In the sub-threshold regime, the amount of Cherenkov light is below the level needed for the trigger logic to operate at a sufficiently low rate of random triggers due to NSB photons; only showers with upward fluctuations in the amount of Cherenkov light will occasionally trigger the system. At GeV energies these fluctuations are large and there is no sharp trigger threshold. Energy measurement in this domain is strongly biased. In the threshold regime, there is usually enough Cherenkov light for triggering the system, but the signal in each telescope may still be too low for (a) location of the image centroid, (b) determination of the direction of the image major axis, or (c) accurate energy assignment. Frequently, a higher threshold than that given by the trigger is imposed in the data analysis. Most showers with upward fluctuations will be reconstructed in a narrow energy range at the trigger (or analysis) threshold. Sources with cut-offs below the analysis threshold may be detectable, but only at very high flux levels. Good imaging and spectroscopic performance of the instrument is only available at energies ≥1.5× the trigger threshold.
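To make the interplay of the three detection conditions concrete, the following Python sketch combines them into a single minimum detectable flux per energy band, using the toy-model parameters quoted above (A = 1 km², \(\epsilon_\gamma\) = 0.5, \(\epsilon_{bg}\) = 0.01, δ = 0.1°, 1% background systematics, 50 h). It is a minimal illustration of the scalings discussed in the text, not a CTA performance calculation: the function name and the residual background flux used in the example call are placeholder assumptions.

import math

def min_detectable_flux(a_eff_m2, t_hours, eps_gamma, eps_bg,
                        bg_flux_m2_s_sr, psf_deg,
                        n_min=10, n_sigma=5, sys_frac=0.01):
    # Minimum detectable gamma-ray flux (per m^2 per s) in one energy band,
    # taken as the largest of the three limits discussed above: a minimum
    # photon count, an n_sigma excess over the background fluctuations, and
    # a signal exceeding the fractional systematic background uncertainty.
    t = t_hours * 3600.0
    omega = math.pi * math.radians(psf_deg) ** 2             # integration solid angle (sr)
    n_bg = eps_bg * bg_flux_m2_s_sr * omega * a_eff_m2 * t   # background events after cuts

    phi_counts = n_min / (eps_gamma * a_eff_m2 * t)
    phi_signif = n_sigma * math.sqrt(n_bg) / (eps_gamma * a_eff_m2 * t)
    phi_syst = sys_frac * n_bg / (eps_gamma * a_eff_m2 * t)
    return max(phi_counts, phi_signif, phi_syst)

# Toy-model parameters from the text; the residual background flux is a
# placeholder value chosen only to make the example run.
phi_min = min_detectable_flux(a_eff_m2=1e6, t_hours=50, eps_gamma=0.5,
                              eps_bg=0.01, bg_flux_m2_s_sr=1e-2, psf_deg=0.1)
print(f"minimum detectable flux: {phi_min:.2e} m^-2 s^-1")

The same counts-limited branch reproduces, for example, the ≥30 km² area quoted earlier for a 1% Crab source above 100 TeV (about 10 photons at 2 × 10⁻¹⁶ cm⁻² s⁻¹ in 50 h).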
High sensitivity over a wide energy range therefore requires an instrument which is able to detect a sufficient number of Cherenkov photons for low-energy showers, which covers a very large area for high-energy showers, and which provides high angular resolution and background rejection. High angular resolution is also crucial to resolve fine structures in extended sources such as supernova remnants. On the other hand, for the detection of extended sources, the integration region \(\Omega_{src}\) is determined by the source size rather than the angular resolution, and cosmic-ray rejection becomes the most critical parameter in minimising statistical and systematic uncertainties. A crucial question is therefore to what extent angular resolution and cosmic-ray rejection can be influenced by the design of the instrument, by parameters such as the number of Cherenkov photons detected or the size of the photo-sensor pixels. Simulation studies assuming an ideal instrument [59], one which detects all Cherenkov photons reaching the ground with perfect resolution for impact point and photon direction, show that achievable resolution and background rejection are ultimately limited by fluctuations in the shower development. Angular resolution is in addition influenced by the deflection of shower particles in the Earth's magnetic field, making the reconstructed shower direction dependent on the energy sharing between electron and positron in the first conversion of a gamma ray (Fig. 5). However, these resolution limits (Fig. 6) are well below the resolution achieved by current instruments. At 1 TeV, a resolution below one arc-minute is in principle achievable. Similar conclusions appear to hold for cosmic-ray background rejection. There is a virtually irreducible background due to events in which, in the first interaction of a cosmic ray, almost all the energy is transferred to one or a few neutral pions and, therefore, to electromagnetic cascades (see, e.g. [60]). However, with their typical cosmic-ray rejection factors of >10³ at TeV energies, current instruments still seem 1–2 orders of magnitude away from this limit, leaving room for improvement. Such improvements could result from improved imaging of the air shower, both in terms of resolution and photon statistics, and from using a large and sensitive array to veto cosmic-ray induced showers based on the debris frequently emitted at relatively large angles to the shower axis.

Fig. 5 Two low-energy gamma-ray showers developing in the atmosphere. Both gamma rays were incident vertically. The difference in shower direction results from the energy sharing between electron and positron in the first conversion and the subsequent deflection in the Earth's magnetic field.

Fig. 6 Limiting angular resolution of Cherenkov instruments as a function of gamma-ray energy, derived from a likelihood fit to the directions of all Cherenkov photons reaching the ground, and assuming perfect measurement of photon impact point and direction. At low energies, the resolutions differ in the bending plane of the Earth's magnetic field (open symbols) and in the orthogonal direction (closed symbols). The simulations assume near-vertical incidence at the H.E.S.S. site in Namibia.

At low energies, cosmic-ray electrons become the dominant background, due to their steep spectrum. Electrons and gamma-rays cannot be distinguished efficiently using shower characteristics, as both induce electromagnetic cascades.
The height of the shower maximum differs by about one radiation length [61], but this height also fluctuates from shower to shower by about one radiation length, rendering an efficient rejection impossible. A technique which is beyond the capability of current instruments but might become possible with future arrays is to detect Cherenkov radiation from the primary charged particle and use it as a veto [59]. Detection of the "direct Cherenkov light" has been proposed [54] and successfully applied [62] for highly charged primary nuclei such as iron, where Cherenkov radiation is enhanced by a factor of Z². While in a 100 m² telescope an iron nucleus generates O(1000) detected photons, a charge-1 primary will provide at most a few photons, not far from night-sky noise levels. Larger telescopes, possibly with improved photo-sensors, fine pixels and high temporal resolution, could enable detection of primary Cherenkov light from electrons, at the expense of gamma-ray efficiency, since gamma rays converting at high altitude will be rejected too, and since unrelated nearby cosmic rays may generate fake vetoes. Nevertheless, this approach (not yet studied in detail) may help at the lowest energies, where event numbers are high but there are large uncertainties in the background systematics. Sakahian et al. [63] note that at energies <20 GeV, the deflection of electrons in the Earth's magnetic field is sufficiently large to disperse Cherenkov photons over a larger area on the ground, reducing the light density and therefore the electron-induced trigger rate. The effect is further enhanced by a dispersion in photon arrival times. In summary, it is clear that the performance of Cherenkov telescope arrays can be improved significantly before fundamental limitations are reached.

6 The Cherenkov Telescope Array

The CTA consortium plans to operate from one site in the southern and one in the northern hemisphere, allowing full-sky coverage. The southern site will cover the central part of the galactic plane and see most of the galactic sources, and will therefore be designed to have sensitivity over the full energy range. The northern site will be optimised for extragalactic astronomy, and will not require coverage of the highest energies. Determining the arrangement and characteristics of the CTA telescopes in the two arrays is a complex optimisation problem, balancing cost against performance in different bands of the spectrum. This section addresses the general criteria and considerations for this optimisation, while the technical implementation is covered in the following sections.

6.1 Array layout

Given the wide energy range to be covered, a uniform array of identical telescopes, with fixed spacing, is not the most efficient solution for the CTA. For the purpose of discussion, separation into three energy ranges, without sharp boundaries, is appropriate:

The low-energy range, ≤100 GeV: To detect showers down to a few tens of GeV, the Cherenkov light needs to be sampled and detected efficiently, with the fraction of area covered by light collectors being of the order of 10% (assuming conventional PMT light sensors). Since event rates are high and systematic background uncertainties are likely to limit the achievable sensitivity, the area of this part of the array can be relatively small, of the order of a few 10⁴ m². Efficient photon detection can be achieved either with few large telescopes or with many telescopes of modest size.
For very large telescopes, the cost of the dish structures dominates; for small telescopes, the photon detectors and electronics account for the bulk of the cost. A (shallow) cost optimum in terms of cost per telescope area is usually reached for medium-sized telescopes in the 10–15 m diameter range. However, if small to medium-sized telescopes are used in this energy range, the challenge is to trigger the array, since no individual telescope detects enough Cherenkov photons to provide a reliable trigger signal. Trigger systems which combine and superimpose images at the pixel level in real time, with a time resolution of a few ns, can address this issue [64] but represent a significant challenge, given that a single 1,000-pixel telescope sampled at (only) 200 MHz and 8 bits per pixel generates a data stream of more than one Tb/s (see the short check below). CTA designs conservatively assume a small number of very large telescopes, typically with about a 20–30 m dish diameter, to cover the low-energy range.

The core energy range, from about 100 GeV to about 10 TeV: Shower detection and reconstruction in this energy range are well understood from current instruments, and an appropriate solution seems to be a grid of telescopes of the 10–15 m class, with a spacing of about 100 m. Improved sensitivity is obtained both by the increased area covered and by the higher quality of shower reconstruction, since showers are typically imaged by a larger number of telescopes than is the case for current few-telescope arrays. For the first time, array sizes will be larger than the Cherenkov light pool, ensuring that images will be uniformly sampled across the light pool, and that a number of images are recorded close to the optimum distance from the shower axis (about 70–150 m), where the light intensity is large and intensity fluctuations are small, and where the shower axis is viewed under a sufficiently large angle for efficient reconstruction of its direction. At H.E.S.S., for example, events which are seen and triggered by all four telescopes provide significantly improved resolution and strongly reduced backgrounds, but represent only a relatively small fraction of events; unless energies are well above the trigger threshold, only events with shower core locations within the telescope square can trigger all telescopes. A further advantage is that an extended telescope grid operated with a two-telescope trigger condition will have a lower threshold than a small array, since there are always telescopes sufficiently close to the shower core.

The high-energy range, above 10 TeV: Here, the key limitation is the number of detected gamma-ray showers, and the array needs to cover multi-km² areas. At high energies the light yield is large, so showers can be detected well beyond the 150-m radius of a typical Cherenkov light pool. Two implementation options can be considered: either a large number of small telescopes with mirror areas of a few m² and spacing matched to the size of the light pool of 100–200 m, or a smaller number of larger telescopes with some 10 m² mirror area which can see showers up to distances of ≥500 m, and can hence be deployed with a spacing of several 100 m, or in widely separated subclusters of a few telescopes. While it is not immediately obvious which option offers the best cost/performance ratio at high energies, the subcluster concept with larger telescopes has the advantage of providing additional high-quality shower detection towards lower energies, for impact points near the subcluster.
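As a quick check of the data-rate figure quoted above for pixel-level trigger schemes, the following lines simply multiply out the numbers given in the text (1,000 pixels, 200 MHz sampling, 8 bits per sample); real systems would of course zero-suppress and compress this raw stream.

# Raw data stream of a single continuously sampled camera (illustrative only).
n_pixels = 1_000
sampling_rate_hz = 200e6      # 200 MHz
bits_per_sample = 8
rate_tb_per_s = n_pixels * sampling_rate_hz * bits_per_sample / 1e12
print(f"{rate_tb_per_s:.1f} Tb/s")   # -> 1.6 Tb/s, i.e. "more than one Tb/s"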
Figure 7 shows possible geometries of arrays with separate regions optimised for low, intermediate and high energies.

Fig. 7 A quadrant of possible array schemes promising excellent sensitivity over an extended energy range, as suggested by the Monte Carlo studies. The centre of the installation is near the upper left corner. Telescope diameters are not drawn to scale. In the upper right part, clusters of telescopes of the 12-m class are shown at the perimeter, while in the lower left part an option with wide-angle telescopes of the 3–4 m class is shown.

6.2 Telescope layout

Irrespective of the technical implementation details, as far as its performance is concerned, a Cherenkov telescope is primarily characterised by its light collection capability, i.e. the product of mirror area, photon collection efficiency and photon detection efficiency, by its field of view, and by its pixel size, which limits the size of image features that can be resolved. The optical system of the telescope should obviously be able to achieve a point spread function matched to the pixel size. The electronics for signal capture and triggering should provide a bandwidth matched to the length of Cherenkov pulses of a few nanoseconds. The performance of an array is also dependent on the triggering strategy; Cherenkov emission from air showers has to be separated in real time from the high flux of night-sky background photons, based on individual images and global array information. The huge data stream from Cherenkov telescopes does not allow untriggered recording. The required light collection capability in the different parts of the array is determined by the energy thresholds, as outlined in the previous section. In the following, the field of view, the pixel size and the requirements on the readout and trigger systems are reviewed.

6.2.1 Field of view

Besides mirror area, an important telescope design parameter is the field of view. A relatively large field of view is mandatory for the widely spaced telescopes of the high-energy array, since the distance of the image from the camera centre scales with the distance of the impact point of the air shower from the telescope. For the low- and intermediate-energy arrays, the best choice of the field of view is not trivial to determine. From the science point of view, large fields of view are highly desirable, since they allow: the detection of high-energy showers at large impact distance without image truncation; the efficient study of extended sources and of diffuse emission regions; and large-scale surveys of the sky and the parallel study of many clustered sources, e.g. in the band of the Milky Way. In addition, a larger field of view generally helps in improving the uniformity of the camera and reducing background systematics. However, larger fields of view for a given pixel size result in rapidly growing numbers of photo-sensor pixels and electronics channels. Large fields of view also require technically challenging telescope optics. With the current single-mirror optics and f/d ratios up to 1.2, an acceptable point spread function is obtained out to 4–5°. Larger fields of view with single-mirror telescopes require increased f/d ratios, in excess of 2 for a 10° field of view (see Fig. 8, [65]), which are mechanically difficult to realise, since a large and heavy focus box needs to be supported at a long distance from the dish.
Also, the single-mirror optics solutions which provide the best imaging use Davies–Cotton or elliptical dish geometries, which in turn result in a time dispersion of shower photons that seriously impacts the trigger performance once dish diameters exceed 15 m. An alternative solution is the use of secondary mirrors. Using non-spherical primaries and secondaries, good imaging over fields of up to 10° diameter can be achieved [66]. Disadvantages are the increased cost and complexity, significant shadowing of the primary mirror by the secondary, and complex alignment issues if faceted primary and secondary mirrors are used. The resulting large range of incidence angles of photons onto the camera can also mean that baffling of albedo light becomes an issue.

Fig. 8 Focal ratio required for sufficiently precise shower imaging, as a function of the half angle of the field of view [65]. Points: simulations for a spherical design (green), a parabolic design with constant radii (red), a Davies–Cotton design (violet) and a parabolic design with adjusted radii (blue). Lines: third-order approximation for a single-piece paraboloid (red) and a single-piece sphere (green).

The choice of the field of view therefore requires that the science gains be carefully balanced against the cost and increased complexity. When searching for unknown source types which are not associated with non-thermal processes in other, well-surveyed wavelength domains, a large field of view helps, as several sources may appear in typical fields of view. This increases the effective observation time per source by a corresponding factor compared to an instrument which can look at only one source at a time. An instrument with CTA-like sensitivity is expected to detect of the order of 1,000 sources. In the essentially one-dimensional galactic plane, there will always be multiple sources in a field of view. In extragalactic space, the average angular distance between (an estimated 500) sources would be about 10°, implying that even for the maximum conceivable fields of view the gain is modest. Even in the galactic plane, a very large field of view will not be the most cost-effective solution, since the gain in terms of the number of sources viewed simultaneously scales essentially with the diameter of the field of view (given that sources are likely to cluster within a fraction of a degree from the plane), whereas camera costs scale with the diameter squared. A very rough estimate based on typical dish costs and per-channel pixel and readout costs suggests an economic optimum in the cost per source-hour at around a 6–8° diameter field of view. The final choice of the field of view will have to await detailed studies related to dish and mirror technology and costs, and the per-channel cost of the detection system. Sensitivity estimates given below do not include an enhancement factor accounting for multiple sources in the field of view, but the effective exposure time should increase by factors of ≥4 for Galactic sources, and the sensitivity correspondingly by factors of ≥2.

6.2.2 Pixel size

The size of the focal plane pixels is another parameter which requires careful optimisation. Figure 9 illustrates how a shower image is resolved at pixel sizes ranging from 0.28° (roughly the pixel size of the HEGRA telescopes) down to 0.07°, as used for example in the large H.E.S.S. II telescope. The cost of focal plane instrumentation is currently driven primarily by the number of pixels and, therefore, scales like the square of the inverse pixel size.
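The two quadratic cost scalings just mentioned (camera cost growing with the square of the field-of-view diameter and with the square of the inverse pixel size) both follow from the channel count. The short sketch below makes this explicit for a circular field of view tiled with square pixels; it is purely illustrative and not a costing tool, and the field-of-view/pixel-size pairs are simply those quoted later for candidate configuration E.

import math

def approx_n_pixels(fov_deg: float, pixel_deg: float) -> int:
    # A circular focal plane of angular diameter fov_deg, tiled with square
    # pixels of size pixel_deg, holds roughly (pi/4) * (FoV / pixel)^2 pixels,
    # i.e. quadratic in the FoV diameter and in the inverse pixel size.
    return round(math.pi / 4.0 * (fov_deg / pixel_deg) ** 2)

for fov, pix in [(5.0, 0.09), (8.0, 0.18), (10.0, 0.25)]:
    print(f"FoV {fov:4.1f} deg, pixel {pix:.2f} deg -> ~{approx_n_pixels(fov, pix)} pixels")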
The gain due to the use of small pixels depends strongly on the analysis technique. In the classical second-moment analysis, performance seems to saturate for pixels smaller than 0.2–0.15° [67]. Analysis techniques which use the full image distribution (e.g. [68]), on the other hand, can extract the information contained in the well-collimated head part of high-intensity images, as compared to the more diffuse tail, and benefit from pixel sizes as small as 0.06–0.03° [59, 66]. Pixel size also influences trigger strategies. For large pixels, gamma-ray images are contiguous, allowing straightforward topological triggers, whereas for small pixels, low-energy gamma-ray images may have gaps between triggered pixels.

Fig. 9 Part of the field of view of cameras with different pixel sizes (0.07, 0.10, 0.14, 0.20, and 0.28°) but identical field of view (of about 6°), viewing the same shower (a 460 GeV gamma-ray at 190 m core distance) with a 420 m² telescope. Low-energy showers would be difficult to register both with very small pixels (signal not contiguous in adjacent pixels) and with very large pixels (not enough pixels triggered above the increased thresholds, due to high NSB rates).

The final decision concerning pixel size (and telescope field of view) will to a significant extent be driven by the cost per pixel. Current simulations favour pixel sizes of 0.07–0.1° for the large telescopes, allowing the resolution of compact low-energy images and reducing the rate of NSB photons in each pixel, 0.15–0.2° for the medium-sized telescopes, similar to the pixel sizes used by H.E.S.S. and VERITAS, and 0.2–0.3° for the pixels of the telescopes in the halo of the array, where large fields of view are required but shower images also tend to be long due to the large impact distances and the resulting viewing angles. Studies to determine the benefits of smaller pixels, as are proposed for AGIS-type dual-mirror telescopes (http://tmva.sourceforge.net), are underway for the medium-sized telescopes.

6.2.3 Signal recording

Most modern telescopes use some kind of transient recorder to capture pixel signals, either with analogue switched-capacitor systems or with fast digitisers [69], so that, at least in principle, signal shape and timing can be used in the image analysis. Signal shape and timing can be employed in two ways: (a) to reject backgrounds such as hadronic showers and local muons; and (b) to reduce the signal integration windows and hence the amount of NSB noise in the shower image. For example, muon rejection based on signal waveform is discussed in [70]. Quantifying how much background rejection can be improved using these techniques is non-trivial. The effect of signal-shape image selection is correlated with other cuts imposed in the analysis. For single telescopes, signal shape and timing can provide significant improvements. For telescope systems, the cuts on image shapes in multiple telescopes are already very powerful, and background events passing these cuts will have images and signal shapes that look very much like those of gamma rays, so that less improvement is expected, if any. The second area where signal waveform recording can improve performance concerns the signal amplitudes. In particular for larger shower impact parameters, photon arrival times are not isochronous across the image (Fig. 10), and photons in the "tail" end of the image arrive with significant delays compared to those from its "head".
Use of variable and matched integration windows across the image allows the extraction of shower signals with minimal contamination from NSB noise. Use of signal shape and timing information is already used in the current MAGIC [71] and VERITAS systems, and these results will help to guide final design choices for CTA. Integrated signal (upper left) and 1 ns samples of the development of a 10 TeV gamma shower at 250 m core distance as seen in a telescope with optics and pixels similar to a H.E.S.S.-1 telescope but with a FoV of 10° diameter. Pixels near the "head" of the shower have a pulse width dominated by the single photoelectron pulse width, while those in the "tail" of the shower see longer pulses. The shower image moves across almost half the FoV in about 25 ns The performance numbers quoted for the simulations described below are conservative in that they are based on fixed (and relatively large) signal integration windows. Improvements can be expected once the use of image shape information is fully understood. 6.2.4 Trigger The trigger scheme and readout electronics are closely related and fundamentally influence the design and performance of the telescope array. For most applications, multi-telescope trigger coincidence is required to reject backgrounds at the trigger level and to reduce the load on the data acquisition system. The main issue here is how much information is exchanged between telescopes, and how image information is stored while the trigger decision is made. One extreme scenario is to let each telescope trigger independently and only exchange a trigger flag with neighbouring telescopes, allowing identification of coincident triggers (e.g. [72]). The energy threshold of the system is then determined by the minimum threshold at which a telescope can trigger. The other extreme is to combine signals from different telescopes at the pixel level, either in analogue or digital form, and to extract common image features. In this case, the system energy threshold could be well below the thresholds of individual telescopes, which is important when the array is made up of many small or medium-sized telescopes. However, the technical complexity of such a solution is significant. There is a wide range of intermediate solutions, where trigger pre-processors extract image features, such as the image centroid, on a telescope basis and the system trigger decision includes this information. In cases where individual telescopes generate a local trigger, pixel signals need to be stored while a global trigger decision is made. The time for which signals can be stored without introducing deadtime, is typically ms in the case of digital storage and μs if analogue storage is used, which strongly influences the design of higher level triggers. Trigger topology is another important issue. Triggers can either be derived locally within the array by some trigger logic connecting neighbouring telescopes, or all trigger information can be routed to a central station where a global decision is made, which is then propagated back to the telescopes. The first approach requires shorter signal storage at the telescopes and is more easily scaled up to large arrays, the second provides maximum flexibility. Whether local or global, trigger schemes will employ a multi-level hierarchy, with a first trigger level acting on pixels and pixel groups, and higher levels using information on image topology and/or the topology of triggered telescopes in the array. 
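As a minimal illustration of such a hierarchy, the sketch below implements the conservative scheme assumed for the simulations described later: an independent local pixel-multiplicity trigger per telescope, followed by a global decision based on the number of locally triggered telescopes within a coincidence window. All thresholds, multiplicities and the window length are illustrative placeholders, not CTA design values.

```python
"""Minimal sketch of a two-level (local + array) trigger decision.

Level 1: a telescope triggers if enough pixels exceed a threshold.
Level 2: the array triggers if enough telescopes trigger within a
coincidence window.  Numbers are placeholders for illustration only.
"""
from dataclasses import dataclass
from typing import List

@dataclass
class TelescopeReadout:
    tel_id: int
    pixel_amplitudes: List[float]   # photoelectrons per pixel
    local_time_ns: float            # time of the local trigger candidate

def local_trigger(tel: TelescopeReadout,
                  pixel_threshold: float = 4.0,
                  min_pixels: int = 3) -> bool:
    """Level-1: at least `min_pixels` pixels above `pixel_threshold` p.e."""
    return sum(a > pixel_threshold for a in tel.pixel_amplitudes) >= min_pixels

def array_trigger(telescopes: List[TelescopeReadout],
                  min_telescopes: int = 2,
                  coincidence_ns: float = 50.0) -> bool:
    """Level-2: enough locally triggered telescopes within a time window."""
    times = sorted(t.local_time_ns for t in telescopes if local_trigger(t))
    for i in range(len(times)):
        n_in_window = sum(1 for t in times[i:] if t - times[i] <= coincidence_ns)
        if n_in_window >= min_telescopes:
            return True
    return False

# Example: two telescopes see a faint image, a third sees only noise.
event = [
    TelescopeReadout(1, [6.1, 5.3, 4.8, 0.9, 1.2], local_time_ns=100.0),
    TelescopeReadout(2, [5.0, 4.6, 4.4, 4.1, 0.7], local_time_ns=118.0),
    TelescopeReadout(3, [1.0, 0.8, 4.2, 0.5, 0.3], local_time_ns=400.0),
]
print("array trigger:", array_trigger(event))   # True: telescopes 1 and 2 coincide
```

More ambitious schemes would exchange analogue or digital pixel-level information between telescopes before any local decision is taken, as discussed above.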
As in modern high-energy physics experiments, trigger decisions will, to the extent possible, be performed using programmable rather than "hardwired" processors. If the signal is recorded using fast digitisers, even the first-level discrimination of pixel signals could be implemented digitally in the gate array controlling the digitiser, instead of applying analogue thresholds. Whatever implementation is chosen, it is important that the trigger system is very flexible and software-configurable, since operation modes vary from deep observations, where all telescopes follow the same source, to monitoring or survey applications, where groups of a few telescopes or even single telescopes point in different directions. The simulations discussed below assume a very conservative approach. Each telescope makes an independent trigger decision with thresholds defined such that the telescope trigger rate is in the manageable range of a few to some tens of kHz. This is followed by a global decision based on the number of triggered telescopes. 6.3 CTA performance summary Section 8 gives a detailed description of the layout and performance studies conducted so far for CTA. Many candidate layouts have been considered. Here we provide a brief description of the nature and performance of one promising configuration (E), which is illustrated in Fig. 18. This configuration utilises three telescope types: four 24 m telescopes with 5° field-of-view and 0.09° pixels, 23 telescopes of 12 m diameter with 8° field-of-view and 0.18° pixels, and 32 telescopes of 7 m diameter with a 10° field-of-view and 0.25° pixels. The telescopes are distributed over ∼3 km2 on the ground and the effective collection area of the array is considerably larger than this at energies beyond 10 TeV. The sensitivity of array E from detailed calculations and using standard data analysis techniques is shown in Fig. 23. More sophisticated analyses result in sensitivities that are ∼20% better across the whole energy range. As Fig. 23 shows, such an array performs an order of magnitude better than an instrument like H.E.S.S. over most of required energy range. Figure 25 shows the angular resolution of this array, which approaches one arcminute at high energies. The energy resolution of layout E is better than 10% above a few hundred GeV. Array layout E has a nominal construction cost of 80 M€ and meets the main design goals of CTA. Given that the configuration itself, and the analysis methods used, have not yet been optimised, it is likely that a significantly better sensitivity can be achieved with an array of this nominal cost which follows the same basic concept. Therefore, despite the uncertainties in the cost model employed (see Section 7.5), we are confident that the design goals of CTA can be realised at close to the envisaged cost. 7 Realizing CTA This section provides a brief overview of the position of CTA in the European and global context, the organisation of CTA during the various stages, of its operation as an open observatory, of the potential sites envisaged for CTA, and of the schedule for and cost of CTA design, construction and operation. 7.1 CTA and the European strategy in astrophysics and astroparticle physics CTA, as a major future facility for astroparticle physics, is firmly embedded in the European processes guiding science in the fields of astronomy and astroparticle physics. 
The European Strategy Forum on Research Infrastructures (ESFRI) ESFRI is a strategic organisation whose objective is to promote the scientific integration of Europe, to strengthen the European Research Area and to increase its international impact. A first Roadmap for pan-European research infrastructures was released in 2006, listing CTA as an "emerging project". In the December 2008 update of this Roadmap, CTA was included as one of eight Physical Sciences and Engineering projects, together with facilities such as E-ELT, KM3Net and SKA. As such, CTA is eligible for FP7 Preparatory Phase funding. The CTA application for this funding was successful, providing up to 5.2 M€ for the preparation of the construction of the observatory over a period of 3 years. The contracts with the EC are in the process of being finalised and signed. The Astroparticle Physics European Coordination (ApPEC) group ApPEC was created to enhance coordination in astroparticle physics across Europe. It has stimulated cooperation and convergence between competing groups in Europe, and has initiated the production of a European roadmap in astroparticle physics, on which CTA is one of the key projects. ASPERA ASPERA is a network of national government agencies responsible for coordinating and funding national research efforts in Astroparticle Physics. One of the tasks of ASPERA is to create a scientific roadmap for Astroparticle Physics (http://www.aspera-eu.org/images/stories/roadmap/aspera_roadmap.pdf) and link it with the more general European scientific infrastructure roadmap. A Phase I roadmap has been published, presenting the overarching science questions and the new instruments planned to address these questions. Phase II saw the release of the resulting "European Strategy for Astroparticle Physics" in September 2008, prioritising the projects under consideration. In this roadmap, CTA emerges as a near-term high-priority project. The roadmap states: The priority project for VHE gamma-ray astrophysics is the Cherenkov Telescope Array, CTA. We recommend design and prototyping of CTA, the selection of sites, and proceeding rapidly towards start of deployment in 2012. CTA was one of the two projects targeted by the 2009 ASPERA Common Call for cross-national funding and received in total 2.7 M€ from national funding agencies. The ASTRONET Eranet ASTRONET was created by a group of European funding agencies to establish comprehensive long-term planning for the development of European astronomy. The objective of this effort is to consolidate and reinforce the world-leading position that European astronomy attained at the beginning of this century. Late in 2008, ASTRONET released "The ASTRONET Infrastructure Roadmap: A Strategic Plan for European Astronomy". CTA is one of the three medium-scale facilities recommended on this roadmap, together with the neutrino telescope KM3Net and the solar telescope EST. 7.2 CTA in the world-wide context Ground-based gamma-ray astronomy has attracted considerable attention world-wide, and while CTA is the key project in Europe, other projects have been considered elsewhere. These include primarily: The Advanced Gamma-ray Imaging System (AGIS) In both science and instrumentation, AGIS (http://www.agis-observatory.org/) followed a very similar plan to that of CTA. The AGIS project was presented in a White Paper prepared for the Division of Astrophysics of the American Physical Society [8].
AGIS proposed a square-kilometre array of mid-sized telescopes, similar to the core array of mid-sized telescopes in CTA but without the additional large telescopes to cover the very lowest energies, and an extended array of small telescopes to provide large detection area at the very highest energies. The baseline configuration of AGIS consisted of 36 two-mirror Schwarzschild-Couder telescopes with an 11.5 m diameter primary mirror. These have a large field of view and a very good angular resolution. Close contacts were established between AGIS and CTA during the design study phase; information was openly exchanged and common developments undertaken. After a US review panel recommended that AGIS join forces with CTA, the US members of the AGIS Collaboration joined CTA in spring 2010. Within the overall context of CTA, development of Schwarzschild-Couder telescopes will be continued to investigate their potential for further improving CTA performance. Significant intellectual, technological and financial contributions to CTA from the US groups are anticipated. Strong US participation in CTA was endorsed by PASAG and the Decadal Survey in Astronomy and Astrophysics (Astro-2010). The High-Altitude Water-Cherenkov Experiment (HAWC) HAWC (http://hawc.umd.edu/) builds on the technique developed by the MILAGRO group, which detects shower particles on the ground using water Cherenkov detectors, and reconstructs the shower direction using timing information. It is proposed to construct the new detector on a site at 4,100 m a.s.l. in the Sierra Negra, Mexico. HAWC will provide a tenfold increase in sensitivity over MILAGRO and detection capability down to energies as low as 100 GeV, largely due to its increased altitude. While it will have lower sensitivity, poorer angular resolution and a higher energy threshold compared to CTA, HAWC has the advantage of a large field of view (≈ 2π sr) and nearly 100% duty cycle. HAWC therefore complements imaging Cherenkov instruments. In fact, it would be desirable to construct and operate a similar instrument in the southern hemisphere, co-located with CTA. The Large High Altitude Air Shower Observatory (LHAASO) LHAASO is an extensive (km²) cosmic ray experiment. The proposal is to locate this near the site of the ARGO and AS-Gamma experiments in Tibet, at 4,300 m a.s.l. The array includes large-scale water Cherenkov detectors (90,000 m²), ground scintillation counter arrays for detecting both muons and electromagnetic particles, fluorescence/Cherenkov telescope arrays and a shower core detector array. The science goals encompass a survey of gamma-ray sources in the energy range ≥100 GeV, measurement of gamma-ray energy spectra of sources above 30 TeV to identify cosmic ray sources, and the measurement of cosmic ray spectra and composition at energies above 30 TeV. If realised, LHAASO will complement the northern CTA array, as the latter concentrates primarily on the detection of low-energy gamma-rays in the energy range from a few times 10 GeV to some 100 GeV. In summary, the other large-scale instruments for ground-based gamma-ray astronomy that are being discussed outside Europe (e.g. HAWC, LHAASO) are complementary to CTA in their capabilities. 7.3 Operation of CTA as an open observatory CTA is to address a wide range of astroparticle physics and astrophysics questions. The majority of studies will be based on observations of specific astronomical sources.
The scientific programme will hence be steered by proposals to conduct measurements of specific objects. CTA will be operated as an open observatory. Beyond a base programme, which will include for example a survey of the Galaxy and deep observations of "legacy sources", observations will be conducted according to observing proposals selected for scientific excellence by peer-review among suggestions received from the community. Following the general procedures developed for and by other major astrophysical facilities, a substantial number of outstanding proposals from scientists working in institutions outside the CTA-supporting countries will be executed. All data obtained by the CTA will be made available in an archive that is accessible to scientists outside the proposing team. Following the experience of currently operating Cherenkov telescope observatories, the actual observations will normally be conducted over an extended period in time, with several different projects being scheduled each night. The operation of the array will be fairly complex. CTA observations will not, therefore, be conducted by the scientists whose individual proposals were selected, but by a dedicated team of operators. CTA observatory operation involves proposal handling and evaluation, managing observation and data-flow, and maintenance. The actual work may be conducted in a central location or in decentralised units (e.g. a data centre and an operations centre) with a coordinating office. 7.3.1 Observatory logistics The main logistic elements of the CTA observatory are: the Science Operation Centre (SOC), which is in charge of the organisation of observations; the Array Operation Centre (AOC), which looks after the operation and monitoring of the telescopes, and the Science Data Centre (SDC), which provides and disseminates data and analysis software to the science community at large, and using the standards of the International Virtual Observatory Alliance (see Fig. 11). Work flow diagram of the CTA observatory. The three main elements which guarantee the functionalities of the observatory are the Science Operation Centre, the Array Operation Centre and the Data Centre. Data handling and dissemination will build on existing infrastructures, such as EGEE and GÉANT The use of existing infrastructures, such as EGEE and GÉANT, and the use of a Virtual Observatory is recommended for all data management tasks in the three elements of the CTA observatory. The high data rate of CTA, together with the large computing power required for data analysis, demand dedicated resources. Hence, EGEE-Grid infrastructures and middleware for distributed data storage, analysis and data access are considered the most efficient solution for CTA. The CTA observatories will very probably be placed in remote locations in southern Africa, Latin or Central America, and/or the Canary Islands. Thus, high-bandwidth networking is critical for remote diagnostics and instant transfer of the data to well-connected European data centres. As for other projects in astronomy, a CTA Virtual Organisation, will provide access to the data. CTA aims to support a wide scientific community, providing access to all levels of data that is archived in a standardised way. It is envisaged to start CTA operations already during the construction phase as soon as the first telescopes are ready to conduct competitive science operations. 
7.3.2 Proposal handling The world-wide community of scientists actively exploiting the results from ground-based VHE gamma-ray experiments currently consists of about 600 physicists (about 150 in each of the H.E.S.S. and MAGIC Collaborations, about 100 in VERITAS, 50 in Cangaroo and 50 in Indian gamma ray activities, plus about 100 scientists either associated, or regularly collaborating, with these experiments). Planning and designing CTA involves about another 100 scientists not currently participating in either of the currently running experiments. Proposals for observations with CTA are hence expected to serve a community of at least 700 scientists, larger than that of any national astronomical facility in Europe, and comparable to the size of the community using the ESO observatory in the 1980s. CTA must therefore efficiently deal with a large number of proposals for a facility which, based on experience with current experiments, is expected to be oversubscribed by a large factor. CTA plans to follow the practice of other major, successful observatories (e.g. ESO), and announce calls for proposals at regular intervals. These proposals will be peer-reviewed by a group of international experts which will change on a regular basis. Different classes of proposals (targeted, surveys, time-critical, target of opportunity, and regular programmes) are foreseen, as is common for current experiments and other ground-based observatories. Depending on the science under investigation, subarray operation may be required. Each site may therefore be conducting several different observation programmes concurrently. 7.3.3 Observatory operations The observing programme of the CTA will be driven by the best proposals from the scientific community, which will be selected in a peer-review process. Successful applicants will provide all the information required for the optimum completion of their measurements. An observing programme will be compiled by the operations centre, taking the requirements of individual projects into account. The programme will be conducted in robotic fashion with a minimum amount of professional staff on site. Proposers are not expected to participate in measurements. Quicklook analysis will enable triggers and on-the-fly modification of projects, if required. Data and calibration files will be provided to the user. Frequent modifications to the scheduled observing programme can be expected for several reasons. Openness of triggers is essential given the transitory and variable nature of many of the phenomena to be studied by CTA. CTA must adapt its schedule to changing atmospheric conditions to ensure the science programme is optimised. The flexibility to pursue several potentially very different programmes at the same time may increase the productivity of the CTA observatory. Routine calibrations and monitoring of the array and of environmental data must be scheduled as needed to ensure the required data quality. Observatory operations covers day-to-day use of the arrays, including measurements and continuous hardware and software maintenance, proposal handling and evaluation, automated analysis and user support, as well as the long-term programme for upgrades and improvements to ensure continued competitiveness over the lifetime of the observatory. 7.3.4 Data dissemination The measurements made with CTA will be subject to on-line analysis, including event-selection and calibration for instrumental effects. 
The analysis of data obtained with Cherenkov telescopes differs from the procedures typical in other wavelength ranges in that extended Monte-Carlo simulations are used to determine the effects of, and calibrate for, the influence of a large range of factors on the measurements. The necessary simulations will be carried out by CTA, used in calibrating the standard pipeline-processed data, and will also be made available to the community for use in proposal planning etc. The principal investigators of accepted proposals will be provided with the results of standard processing and access to the standard MC simulations and the analysis pipelines used in data processing. Storage of data and archiving of scientific and calibration data, programs, and MC simulations used in the processing will be organised through the distributed computing resources made available in support of the CTA EGEE Virtual Organisation. The processing of CTA data represents a major computational challenge. It will be necessary to reduce a volume of typically 10 TBytes of raw data per observation to a few tens of MBytes of high-level data within a couple of hours. This first-level data processing will make heavy use of Grid technology by running hundreds of processes within a global pipeline. Data processing also requires the production and analysis of the MC simulations needed for calibration. The integrated services and infrastructures dedicated to the MC production, analysis and dissemination have to be taken into account in the CTA data pipeline. All levels of data will be archived in a standardised way, to allow access and re-processing by the scientific community. Access to all levels of data and Grid infrastructures will be provided through a single access point, the "VHE gamma-ray Science Gateway". Figure 12 shows an overview of the integrated application of e-infrastructures such as EGEE-Grid, GÉANT and the CTA VO. Schematic of the integrated application of e-infrastructures like EGEE-GRID, GÉANT and VO for the CTA observatory, together with the 2009 status of the CTACG (CTA Computing Grid) project (http://lappwiki01.in2p3.fr/CTA-FR/doku.php?id=cta-computing-grid-public). The VO-CTA Grid Operation Centre houses the EGEE services It is foreseen that the high level analysis of CTA data can be conducted by individual scientists using the analysis software made available by CTA. This software will follow the standards used by other high-energy observatories and will be provided free of charge to the scientific community. 7.4 CTA organisation The organisation of the CTA consortium will evolve over the various stages of the project. These include: The design study phase. Definition of the layout of the arrays, specification of the telescope types, design of the telescopes and small-scale prototyping. The prototyping and preparatory phase. Prototyping and deployment of full-scale telescopes, preparation of the construction and installation including solving technical, organisational and legal issues, site preparation. The construction phase. Construction, deployment and commissioning of the telescopes. The operation phase. Operation as an open observatory, with calls for proposals and scheduling, operation and maintenance of the facility, processing of the data and provision of analysis tools. For the design study phase, the organisation of the consortium was defined in a Memorandum of Understanding modelled on those proven by large experiments in particle and astroparticle physics.
The governing body is the Consortium Board and operational decisions are taken and work is coordinated by the Spokespersons and the Executive Board. Work Package Convenors organise and drive the work on essential parts of the project. The work packages and the area they cover are: PHYS The astrophysics and astroparticle physics that will be studied using CTA. Development of simulations for optimisation of the array layout and analysis algorithms, and for performance studies. Evaluation of possible sites for CTA and infrastructure requirements. Design of telescope optics and mirror construction. Design of telescope structure and associated drive and control systems. Development of focal plane instrumentation. Design and development of the readout electronics and trigger. Development of atmospheric monitoring and calibration techniques and associated instrumentation. Development of observatory operation and access strategies. Studies of data handling, processing, management and data access. Quality assurance and risk assessment strategies. The CTA design study phase was organised in terms of scientific/technical topics, rather than in terms of telescope types, to ensure that, as far as possible, common technical solutions are employed across the array, maximising economies of scale and simplifying array operation. For the preparatory phase, the organisation will be adapted to the needs of the project. The Project Office will be extended, and work packages for each telescope type will be established to steer prototyping and preparations for construction. External advisors will assist in guiding and reviewing the project. A significant task for the preparatory phase will be the definition of the legal framework and governance structure of the CTA Collaboration and observatory. Different models exist, each of which has its own advantages and disadvantages. CTA could for example be realised within an existing international organisation such as CERN or ESO. CTA could also be operated by a large national laboratory which has sufficient administrative and technical infrastructure. Suitable national laboratories exist e.g. in Germany, France, or the UK, for example. On a smaller scale, H.E.S.S. and MAGIC are operated in this mode. CTA could be established as an independent legal entity under the national law of some country, following the example of IRAM. The definition of the legal structure of CTA will be determined in close interaction with ASPERA (a group of European Research Area funding agencies which coordinates astroparticle physics in Europe). One of their main tasks is the "Implementation of new European-wide procedures for large infrastructures". Regardless of the legal implementation, CTA management will be assisted by an international scientific and technical Advisory Board, and a Resource Board, composed of representatives of the national funding organisations supporting CTA. Close contacts between CTA and the funding agencies (via the Resource Board) during all stages of the project are vital to secure sufficient and timely funding for the construction of the facility. 7.5 Time schedule and costs CTA builds largely on proven technologies and Cherenkov telescopes of sizes similar to those needed for CTA have already been built or are in the advanced stages of construction. 
Remaining challenges are: (a) optimisation of the cost of telescope components; (b) improvement of the reliability of telescope components, requiring extensive prototyping; (c) establishment of the formal framework for building and operating the instrument, and the selection and provision of sites; and (d) the funding of the infrastructure. These challenges will be addressed during the Preparatory Phase (2010–2013) which will be supported by an FP7 grant of up to 5.2 M€ from the European Community and by grants from various national funding agencies. After a successful Preparatory Phase, and provided the funding has been secured, construction and deployment will then take from 2013 until 2018. A detailed evaluation of the required construction and running costs is part of the Preparatory Phase studies. Current design efforts are conducted within an envelope of investment costs for the CTA construction and site infrastructure of 100 M€ for the southern site, featuring full energy coverage, and 50 M€ for the more specialised northern site (all in 2005 €). CTA aims to keep running costs below 10% of the total investment, in line with typical running costs for other astrophysical facilities. Estimates for the costs of all major components of CTA are required for any optimisation of the array design. The current model makes the following assumptions: The investment required to construct CTA (according to European accounting schemes) is 100 M€ for CTA-South and 50 M€ for CTA-North. For both sites 20% of the budget is required for infrastructure and a central processing farm. Therefore, for example, telescope construction for CTA-South is anticipated to cost 80 M€. The construction of the telescope foundation, optical support structure, drive/safety system and camera masts will cost 450 k€ for a 12 m telescope and the cost scales as (dish area)^1.35. Mirrors, mounts and actuators will cost ≈ 1.7 k€/m². Camera mechanics, photo-sensor and electronics costs will be 400 €/pixel, including lightcones, support structures and cooling systems. Miscellaneous additional costs of about 20 k€/telescope will be incurred. This cost model will evolve as the design work on the different components of CTA progresses. 8 Monte Carlo simulations and layout studies The performance of an array of imaging atmospheric Cherenkov telescopes such as CTA depends on a large number of technical and design parameters. These include the general layout of the installation, with telescope sizes and locations, telescope optics, camera field-of-view and pixel size, signal shapes and trigger logic. In searching for the optimum configuration of a Cherenkov telescope array, one finds that most of these parameters are intimately related, either technically or through constraints on the total cost. For many of these parameters there is experience from previous gamma-ray installations such as HEGRA, CAT, H.E.S.S., and MAGIC that provide reasonable starting points for the optimisation of CTA parameters. Whilst the full optimisation of CTA has not yet been completed, extensive simulation studies have been performed and demonstrate that an array of ≥60 Cherenkov telescopes can achieve the key performance targets for CTA, within the cost envelope described earlier. This section gives a summary of the most important simulation studies performed so far.
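To make the cost envelope just quoted concrete, the rough per-component cost model of Section 7.5 can be applied to one of the candidate layouts discussed below (configuration E of Section 6.3: four 24 m, 23 × 12 m and 32 × 7 m telescopes). The sketch below does exactly that; pixel counts are estimated from the quoted fields of view and pixel sizes assuming circular cameras, so the result should be read only as an order-of-magnitude cross-check against the ~80 M€ telescope budget, not as a costing.

```python
"""Back-of-the-envelope application of the Section 7.5 cost model to an array
similar to candidate configuration E.  Pixel counts are crude estimates from
the quoted fields of view and pixel sizes, so the total is only indicative.
"""
import math

PIXEL_COST = 400.0          # euro per pixel (camera mechanics, sensor, electronics)
MIRROR_COST_M2 = 1.7e3      # euro per m^2 of mirror
STRUCTURE_COST_12M = 450e3  # euro, foundation/structure/drives of a 12 m telescope
STRUCTURE_EXPONENT = 1.35   # structure cost scales as (dish area)**1.35
MISC_COST = 20e3            # euro per telescope

def dish_area(d_m):
    return math.pi / 4.0 * d_m ** 2

def n_pixels(fov_deg, pixel_deg):
    """Pixels needed to fill a circular field of view."""
    return math.pi / 4.0 * (fov_deg / pixel_deg) ** 2

def telescope_cost(d_m, fov_deg, pixel_deg):
    structure = STRUCTURE_COST_12M * (dish_area(d_m) / dish_area(12.0)) ** STRUCTURE_EXPONENT
    mirrors = MIRROR_COST_M2 * dish_area(d_m)
    camera = PIXEL_COST * n_pixels(fov_deg, pixel_deg)
    return structure + mirrors + camera + MISC_COST

# (count, diameter [m], field of view [deg], pixel size [deg]) as in configuration E
array_e = [(4, 24.0, 5.0, 0.09), (23, 12.0, 8.0, 0.18), (32, 7.0, 10.0, 0.25)]
total = sum(n * telescope_cost(d, fov, pix) for n, d, fov, pix in array_e)
print(f"estimated construction cost: {total / 1e6:.0f} M€ (quoted: ~80 M€)")
```

With these assumptions the total comes out around 70 M€, i.e. within roughly 15% of the quoted figure, which is as close as can be expected given the crude pixel-count estimate.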
8.1 Simulation tools Only a modest number of candidate configurations has been simulated in full detail during the design study, but this still required the simulation of close to 10¹¹ proton, gamma, and electron induced showers, with full treatment of every interaction, tracking all the particles generated in these showers through the atmosphere, simulating emission of Cherenkov light, propagating the light down to the telescopes, reflecting it on multi-faceted mirrors, entering photomultiplier tubes, generating pulses in complex trigger electronics, and having them registered in analogue-to-digital circuits. Simulations include not only Cherenkov photons but also NSB light, resulting in the registration of photons at rates of ∼100 MHz in a typical photo-sensor. Since the discrimination between γ-ray and hadron showers in CTA will surpass that of the best current instruments by a significant factor, huge numbers of background showers must be simulated before conclusions on the performance of a particular configuration can be drawn. Work is underway to reduce the CPU-time requirement by preferentially selecting proton showers early in their development if they are more likely to appear γ-like. This should lead to a substantial speed improvement in future studies. Early results from toy models, which parametrize shower detection characteristics and are many orders of magnitude faster, are encouraging, but cannot yet be seen as adequate replacements for the detailed simulation process. The air-shower simulation results presented here are based on the CORSIKA program [73], which is widely used in the community and very well tested. Cross-checks with the KASCADE-C++ air-shower code [74] have been performed as part of this study. Simulations of the instrument response have been carried out with three codes. Two packages initially developed for H.E.S.S. (sim_telarray [75] and SMASH [76]), and one for MAGIC simulations [77], were cross-checked using an initial benchmark array configuration. The large volume of simulations, dominated by those of proton-induced showers needed for background estimations, has motivated the use of EGEE (Enabling Grids for E-sciencE) for the massive production of shower and detector simulations. A Virtual Organisation has been founded and a first set of CORSIKA showers has been generated on the GRID, while a specific interface for job submission and follow-up for simulations and analysis is currently under development. The detailed simulations described here result in data equivalent to experimental raw data (ADC counts for each time-slice for each pixel). Analysis tools are needed to reconstruct shower parameters (in particular energy and direction) and to identify γ-ray showers against the background from hadron-initiated showers (note that the additional background from electron-induced showers is important at intermediate energies despite the much lower electron flux as electron showers are extremely difficult to differentiate from those initiated by photons). The analysis methods currently used are based on experience with past and current instruments, but are being developed to make full use of the information available for CTA, in particular to exploit the large number of shower images that CTA will provide for individual events.
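All of the analysis chains used for these studies start from the same basic building block, the second-moment (Hillas) parametrisation of cleaned camera images described in the next paragraph. For orientation, a minimal sketch of that parametrisation is given below; the toy image is hypothetical and stands in for the output of the image-cleaning step, not for real simulated data.

```python
"""Minimal sketch of the second-moment (Hillas) image parametrisation.

Input: an already-cleaned toy image (pixel coordinates in the camera plane
in degrees, amplitudes in photoelectrons).  Output: amplitude-weighted
centroid, image length, width and major-axis orientation.
"""
import numpy as np

def hillas_parameters(x_deg, y_deg, amp_pe):
    """Amplitude-weighted first and second moments of a cleaned image."""
    x, y, a = map(np.asarray, (x_deg, y_deg, amp_pe))
    size = a.sum()
    mx, my = np.average(x, weights=a), np.average(y, weights=a)
    # central second moments
    sxx = np.average((x - mx) ** 2, weights=a)
    syy = np.average((y - my) ** 2, weights=a)
    sxy = np.average((x - mx) * (y - my), weights=a)
    # eigenvalues of the 2x2 covariance matrix give length and width
    common = 0.5 * (sxx + syy)
    diff = np.hypot(0.5 * (sxx - syy), sxy)
    length = np.sqrt(common + diff)
    width = np.sqrt(max(common - diff, 0.0))
    psi = 0.5 * np.arctan2(2.0 * sxy, sxx - syy)   # orientation of the major axis
    return dict(size=size, cog=(mx, my), length=length, width=width, psi=psi)

# Toy elongated image roughly along the x axis
x = np.array([-0.30, -0.15, 0.00, 0.15, 0.30, 0.00])
y = np.array([ 0.02, -0.01, 0.00, 0.03, -0.02, 0.08])
amp = np.array([ 20.0, 45.0, 80.0, 50.0, 25.0, 10.0])
print(hillas_parameters(x, y, amp))
```

The resulting image amplitude ("size"), width, length and orientation are the quantities on which both the direct cuts and the multivariate classifiers mentioned below operate.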
The analyses in this study are based on several independent codes, all of which start with cleaning of images to identify signal pixels, and a parametrisation of images by second-moment Hillas parameters [78], augmented by parameters such as the height of shower maximum as reconstructed from stereo images. Background rejection is achieved both by direct cuts on (suitably normalised) image parameters and by more general multivariate analysis tools such as a Random Forest [79] classifier and Boosted Decision Trees within the open source software package TMVA (http://tmva.sourceforge.net) [80, 81]. There are also other analysis methods in use for the analysis of Cherenkov telescope data, such as the 3-D-model analysis [82], the Model++ analysis [68], and analytical combinations of probability density functions of discriminating variables, which have advantages over the standard second-moments analysis in at least some energy ranges. Some of these alternative methods have been used for a subset of the studies presented here. 8.2 Verification of simulation tools The optimisation of CTA relies heavily on detailed simulations to predict signal and background rates, angular resolution and overall sensitivity. To demonstrate that the simulation tools in use accurately describe reality, we show here some key data/simulation comparisons, taking H.E.S.S. as an example. A key aspect of the simulation of the detector response to Cherenkov light from an air-shower is the ray-tracing of light through the optical system of an individual telescope. An understanding of the typical misalignments of all components is needed at this stage, as is the ideal performance. The optical performance of a telescope is described by its point spread function (PSF), which degrades for off-axis rays. Figure 13 illustrates that the modelling of the optical system of, in this case, a H.E.S.S. telescope reproduces the width and shape of the PSF in all details, and that essentially identical imaging is achieved for different telescopes in the system. Optical point spread function of two H.E.S.S. telescopes as a function of angle of incidence, measured using stars, and compared to simulations. Data points are shown for the radial and tangential width of the PSF, and the 80% containment radius. Lines represent the results of simulations of the telescope optics using sim_telarray. See [83] for details An end-to-end test of the correct simulation of gamma-ray induced showers can be made using the signal from a strong source under very high signal/background conditions. The giant flare from the blazar PKS 2155-304 observed with H.E.S.S. in 2006 provides an excellent opportunity for such a test. Figure 14 shows the satisfactory agreement (typically at the 5% level) between the simulated and detected shower image shapes, as characterised by their Hillas width and length parameters. Gamma-ray showers were simulated with the CORSIKA and KASCADE-C++ programs and have been passed through one of the H.E.S.S. detector simulation and analysis chains. The measured spectrum, optical efficiency, zenith angle and other runtime parameters were used as inputs to this simulation. Comparison of measured (black squares) and simulated (red triangles and blue circles) image parameters for the H.E.S.S. telescopes.
Measured data are taken from a flare of the blazar PKS 2155-304 [84] for which the signal/noise ratio was very high and large gamma-ray statistics are available In the analysis of experimental data, it is sufficient for simulations to describe the characteristics of gamma-ray detection, since the cosmic-ray background can (except for very diffuse sources) be modelled and subtracted using measurements in regions without gamma-ray emission. However, for the design of new instruments, simulations must also provide a reliable modelling of all relevant backgrounds. Experience with existing systems shows that this is indeed possible, provided that background events are simulated over a very wide area, up to an impact distance of around a kilometre from any telescope and over a large solid angle, well beyond the direct field of view of the instrument, so that far off-axis shower particles are properly included. An inherent uncertainty in the simulation of the hadronic background is given by the currently limited knowledge of hadronic interaction processes at very high energies. The impact of this uncertainty on the Cherenkov light profile has been studied using CORSIKA simulations with different interaction models. As can be seen in Fig. 15, the low energy (<80 GeV) models FLUKA [85] and UrQMD [86] do not exhibit significant differences, whereas the known discrepancy between the high-energy models QGSJet-01 [87], QGSJet-II [88, 89] and SIBYLL 2.1 [90] leads to an uncertainty of about 5% in the Cherenkov light profile at 1 TeV. Comparison of the Cherenkov light profiles for proton-induced showers generated with different hadronic interaction models. The profiles for FLUKA and UrQMD at 50 GeV (left) and 100 GeV (right) are shown in the top panel. Two QGSJet versions and SIBYLL at 1 TeV are compared in the bottom panels As can be seen in Fig. 16, the raw cosmic-ray detection rate as a function of zenith angle is described to within about 20%. Given the uncertainties on cosmic-ray flux, composition above the atmosphere and in the hadronic interaction models, better agreement cannot be expected. In the background-limited regime this uncertainty corresponds to a 10% uncertainty in sensitivity, assuming that the fraction of γ-like events is understood. Figure 17 demonstrates that the fraction of such events, and the distributions of separation parameters, are indeed well understood for instruments such as H.E.S.S. using the simulation and analysis tools applied here to CTA. Dependence of H.E.S.S. system trigger rate on zenith angle, for data and simulations. The simulations assume two different model atmospheres, with the atmosphere at the H.E.S.S. site representing an intermediate case. See [72] for more details Measured distribution of the proton/electron separation parameter ζ for 239 hours of H.E.S.S. data on sky fields without gamma emission, compared to simulations of proton- and electron-induced showers. The shape of the background is very well reproduced by simulations across the full range of ζ. Gamma-ray signals appear close to ζ = 1. The electron background is therefore important despite the relatively low flux of electrons in comparison to hadrons. See [91] for more details 8.3 Energy range and sensitivity of telescope arrays Three methods of representing the sensitivity of a Cherenkov telescope are used in the following discussion. All three have merits and emphasise different features. 
The traditional way to represent the sensitivity of Cherenkov Telescope systems is in terms of integral sensitivity, including all events reconstructed above a given energy (and often multiplied by the threshold energy to flatten the curves and give more useful units of erg/(cm² s)). An observation time of 50 hours (typical for the first generation of IACTs) is assumed for comparison to published sensitivity curves of historical and current instruments. Integral sensitivities depend on the assumed source spectrum and can be deceptive in that much of the detection power quoted for a given threshold may actually be derived from events well above that threshold. A more useful, but less common, way to represent the sensitivity of IACTs is in terms of differential sensitivity, where a significant detection (above 5% of the background level, with ≥ 5 σ statistical significance and at least ten events) is required in each energy bin. Five bins per decade in energy are used for the following results for possible CTA configurations. The differential flux sensitivity is sometimes multiplied by E² to show the minimum source flux in terms of power per logarithmic frequency interval and given in units of erg cm⁻² s⁻¹ for ease of comparison with other wavebands. Alternatively, the Crab nebula, as a strong and non-variable gamma-ray source with a rather typical spectral shape, can be used as a reference. Here we use the VHE spectrum as measured with the HEGRA telescope array as a reference, i.e. 1 Crab Unit (CU) = \(2.79 \times 10^{-11}\,(E/\mathrm{TeV})^{-2.57}\ \mathrm{cm^{-2}\,s^{-1}\,TeV^{-1}}\). (Note that the true spectrum of the Crab nebula falls below this expression at the highest and lowest energies.) Several different telescope configurations have been investigated in simulation studies for CTA so far. The first simulations were used to cross-check the different simulation packages and to begin the investigation of the dependence of performance on telescope and array parameters. Selected results from one of these, an array of nine telescopes with 24 m diameter (the "benchmark" array), are discussed below. Following these studies a series of simulations were conducted with larger telescope arrays (including 41 × 12 m telescopes and a 97-telescope array with two different telescope sizes) to demonstrate that the goals of CTA are attainable with a large telescope array (see [92]). More recently, a 275 telescope "production configuration" has been simulated, subsets of which constitute CTA candidate configurations. So far 11 candidate configurations have been defined with an approximately equal construction cost of about 80 M€ (in 2005 €) with the current CTA cost model. The evaluation of the performance of these candidate arrays is a first step towards the optimisation of the CTA design. Figure 18 shows some of the telescope layouts used. All systems assume conventional technology for mirrors, PMTs and read-out electronics. Standard analysis techniques are used in general, with the results from more sophisticated methods shown for comparison in specific cases. Top 275 telescope super-configuration for the MC mass production. Five telescope types are simulated (red: 24 m diameter telescopes, black and green: 12 m, pink: 10 m, blue: 7 m), with the circle size proportional to the mirror area.
Bottom Three example candidate configurations (B, C and E) which are subsets of the 275 telescope array and would all have an approximate construction cost of 80 M€ The nine-telescope benchmark array has been used to test several aspects of array performance, in particular the desirable altitude range and best pixel size for the lower part of the CTA energy range. Figure 19 compares arrays located at different elevations (2,000, 3,500 and 5,000 m) and also illustrates the influence of systematic errors in the background determination at low energies. The spacing of telescopes is adjusted to compensate for the changing radius of the Cherenkov light-pool with altitude. For 2,000 m elevation, the array has useful sensitivity above ≈20 GeV and at higher energies dips below the 1% Crab level. An equivalent system at high elevation (5,000 m) provides a lower threshold but worse performance at high energies, at least partly reflecting the smaller diameter of the light pool at high altitude and hence the reduced detection area. Another potential problem at very high altitudes is the contamination of the signal by Cherenkov light from individual shower particles which reach the observation level. Sensitivities cross at about 30 GeV, implying that a high-altitude installation is mainly relevant for specialised very-low-energy instruments, such as the 5@5 array [93]. Similar conclusions were reached in earlier simulations by Plyasheshnikov (private communication) and Konopelko [94]. A 3,500 m altitude array delivers a somewhat lower energy threshold than one at 2000 m and comparable performance at 0.1–1 TeV for the benchmark array. However, it is not clear that this result on relative performance at intermediate energies can be generalised to the much larger telescope array of smaller telescopes with which CTA plans to cover this energy range. Simulations of the 275-telescope array at 3,700 m altitude are underway to address this question. Differential sensitivity (with five independent bins per decade in energy) of the nine-telescope benchmark array placed at 2,000, 3,500 and at 5,000 m elevation, for point sources observed for 50 h at a zenith angle of 20°. A 5σ significance, at least ten signal events, and a signal exceeding 5% of the remaining background is required for a detection. The image cleaning method applied uses dual threshold 5/10 photoelectron Figure 20 shows the impact of changing the (angular) pixel diameter (Θ p ) on the sensitivity of the benchmark array at 2,000 m altitude. It can be seen that only modest improvements are possible with pixels below 0.1° diameter. As the camera cost increases as \(1/\Theta_{p}^{2}\), smaller pixels sizes are strongly disfavoured. The improvement of angular resolution at smaller pixel size is also found to be modest in our studies (see also [95]). Alternative analyses may lead to significant benefits from smaller pixel sizes, but this has not yet been demonstrated. Differential sensitivity curves for the nine-telescope benchmark array for several different pixel sizes using the same criteria as for the previous figure. Image cleaning is adapted to the respective noise levels in each case. The impact of reduced pixel size is mainly visible close to the threshold energy The 275-telescope production configuration described above is the focus of the current work within CTA and has been used to demonstrate the validity of the CTA concept. 
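For orientation when reading the sensitivity results in this section, the Crab Unit convention defined above can be turned into numbers with a few lines of code. Everything below follows directly from the quoted HEGRA reference spectrum; the only external input is the standard conversion 1 TeV ≈ 1.602 erg, and the caveat from the text applies that the true Crab spectrum deviates from this power law at the lowest and highest energies.

```python
"""Helper sketch for the reference units used in the sensitivity figures:
the HEGRA Crab spectrum as 1 Crab Unit (CU), converted to integral flux
above a threshold and to the E^2 dN/dE representation in erg cm^-2 s^-1.
"""
TEV_TO_ERG = 1.602           # 1 TeV in erg (standard conversion)
CU_NORM = 2.79e-11           # cm^-2 s^-1 TeV^-1 at 1 TeV (HEGRA reference)
CU_INDEX = 2.57              # photon index of the reference power law

def crab_differential(e_tev, fraction=1.0):
    """dN/dE in cm^-2 s^-1 TeV^-1 for a source of `fraction` Crab Units."""
    return fraction * CU_NORM * e_tev ** (-CU_INDEX)

def crab_integral(e_thr_tev, fraction=1.0):
    """Integral flux above e_thr in cm^-2 s^-1 (analytic power-law integral)."""
    return fraction * CU_NORM / (CU_INDEX - 1.0) * e_thr_tev ** (1.0 - CU_INDEX)

def e2_flux_erg(e_tev, fraction=1.0):
    """E^2 dN/dE in erg cm^-2 s^-1, the units used in the sensitivity plots."""
    return e_tev ** 2 * crab_differential(e_tev, fraction) * TEV_TO_ERG

print(f"1 CU integral flux above 1 TeV : {crab_integral(1.0):.2e} cm^-2 s^-1")
print(f"1% CU, E^2 dN/dE at 1 TeV      : {e2_flux_erg(1.0, 0.01):.2e} erg cm^-2 s^-1")
```

For example, a source at the 1% Crab level corresponds to an integral flux above 1 TeV of about 1.8 × 10⁻¹³ cm⁻² s⁻¹ and to E² dN/dE ≈ 4.5 × 10⁻¹³ erg cm⁻² s⁻¹ at 1 TeV.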
Figure 21 shows some example events as seen in a candidate sub-configuration of this production array, demonstrating the high telescope multiplicity (and event quality) which is a key element of the CTA design. Three events as seen by the 59-telescope candidate array E. The gamma-ray energy and number of images seen are shown in each instance. The left-hand plots show the telescopes on the ground (the three sizes of circles for the telescopes of diameters 7, 12 and 24 m, respectively), with projected Hillas ellipses drawn relative to each telescope position for each triggered telescope. Higher amplitude images are filled with darker grey. The point of intersection of the primary trajectory with the ground is marked with a star. It is found in a simultaneous fit of both core and direction. The truncation of images at large impact distances is clearly visible. The right-hand plots shows the same ellipses in the camera plane, with the gamma-ray source position marked with a star. (In the most rudimentary analysis one can reconstruct the impact point on ground by the intersection of the directions from image centroids to each of their telescope positions (dotted lines on the left), and the gamma-ray direction in the sky from the intersection of the image axes (right)) Figure 22 shows how the angular resolution defined as the 68% containment radius, improves with the number of telescopes that record a shower image. With four images (as for instruments like H.E.S.S. or VERITAS) a resolution of about 0.1° is reached, while with ≥12 images the resolution is ≤0.05°. For the most energetic showers, resolutions of <0.02° are reached. Analogous simulations for AGIS [96] give a very similar angular resolution. The telescopes simulated include one type of 12 m diameter, 8° field-of-view and 0.18° pixels (squares in Fig. 21, used in configurations B, C and E), one type of 7 m diameter, 10° field-of-view and 0.25° pixels (triangles in Fig. 21, used in configuration E) and a 24-m telescope type with 5° field-of-view and 0.09° pixels (circles in Fig. 21, used in configurations B and E). The 24-m telescopes use parabolic optics, all other telescopes are based on the Davies–Cotton design. Optical designs intermediate between parabolic and Davies–Cotton are now under consideration to optimise the trade-off between time-dispersion and off-axis performance. For the cameras, a quantum efficiency curve of similar spectral shape (blue-sensitive) to that of current bi-alkali PMTs is assumed. This is a conservative assumption as ∼50% higher efficiency cathodes have recently been announced by several major manufacturers (albeit with larger after-pulsing rates, which may limit the advantage gained in terms of trigger threshold). Angular resolution (68% containment radius) for array configuration E, as a function of the number of telescopes with good shower images Figure 23 illustrates the integral flux sensitivity achieved with the three candidate CTA configurations shown above. The goal sensitivity curve for CTA is shown for comparison. It can be seen that these configurations (even with rather basic analysis methods) are close to achieving the goal performance in most energy ranges. At very high energies it seems to be possible to exceed the original goal performance by a significant factor within the nominal project budget. As the three configurations B, C and E have roughly equal cost, they can be used to show the impact of changing the energy emphasis of the observatory on the performance achieved. 
Configuration C covers a very large area (∼5 km²) but lacks any telescopes larger than 12 m and hence has very little sensitivity below 100 GeV. Configuration B has a low-energy core of 24-m telescopes surrounded by a closely spaced 12-m telescope array. This configuration provides superior hadron rejection and angular resolution (see later) but provides a more modest effective collection area at multi-TeV energies. Configuration E is a compromise array, which attempts to do well in all energy ranges using multiple telescope types and spacings. As can be seen from Fig. 23, such an array comes closest to achieving the CTA performance goals. Integral sensitivity (multiplied by E) for the candidate configurations B, C and E, for point sources observed for 50 h at a zenith angle of 20°. The goal curve for CTA (dashed line) is shown for comparison It is important to study the potential sensitivity of CTA at much shorter observation times than the 50 h used for reference. Figure 24 shows how the sensitivity changes for 5-h and 0.5-h observations. The sensitivity scales linearly with time t in the regime limited by gamma-ray statistics and approximately with \(\sqrt{t}\) in the background limited regime at lower energies. For candidate array E, the detection of a source with 2% of the Crab Nebula flux (the flux level of the weakest known sources of VHE gamma-rays until 2007) would be possible in just over 30 min. Extreme AGN outbursts, which in the past have reached flux levels >10× the Crab flux, could be studied with a time resolution of seconds, under virtually background-free conditions. Figure 24 also shows 50-h sensitivity curves calculated using two independent analyses, illustrating (a) that the conclusions on sensitivity presented here are robust and (b) that the sensitivity can be improved using more advanced methods for background suppression over much of the CTA energy range. Time and energy dependence of the differential sensitivity (for five independent measurements per decade in energy, multiplied by E²) for configuration E. Exposure times of 0.5, 5 and 50 h are shown. Selection cuts were optimised separately for each exposure time. For the 50-h curve two alternative analysis methods are also shown. The red curve is for an analysis chain with an image cleaning procedure and a Random Forest-based method for hadron rejection. An independent analysis using TMVA for hadron rejection is shown as a blue curve The angular resolution for the CTA candidate systems is summarised in Fig. 25. Resolution at 1 TeV is in the 0.04–0.05° range for configurations B and E, and somewhat worse for the larger area configuration C, illustrating the trade-off between collection area and precision at fixed cost. A simultaneous minimisation to find the best shower core and direction, using pixel timing information, provides a significant improvement over the traditional intersection of image axes technique (see dashed line in Fig. 25). The resolution approaches 1 arcminute at high energies. Fiducial cuts on core location and/or harder telescope multiplicity cuts improve this performance, at the expense of collection area. Angular resolution (68% containment radius of the gamma-ray PSF) versus energy for the candidate configurations B, C and E. The resolution for a more sophisticated shower axis reconstruction method for configuration E is shown for comparison (dashed red line—E*). The angular resolution of H.E.S.S.
(basic Hillas analysis, standard cuts) is shown as a reference [97] The energy resolution (for photon showers) as a function of energy is shown in Fig. 26 for the candidate arrays B, C and E. The energy resolution is below 30% in almost the whole range of interest and ≤10% above about 1 TeV. Energy resolution versus energy for the candidate configurations B, C and E In summary, whilst the final optimisation of the CTA design will require accurate cost models and input from quantitative "key science projects", it is clear from our current studies that an array of ∼60 wide field of view Cherenkov telescopes can achieve the key performance goals of CTA within the envisaged level of investment. 9 CTA telescope technology A particular size of Cherenkov telescope is only optimal for covering about 1.5–2 decades in energy. Three sizes of telescope are therefore needed to cover the large energy range CTA proposes to study (from a few tens of GeV to above 100 TeV). The current baseline design consists of three types of single-mirror telescope: SST: Small size telescopes of 5–8 m diameter; MST: Medium size telescopes of 10–12 m diameter; and LST: Large size telescopes of 20–30 m diameter. While telescope optics involving multiple reflectors or optical correctors have been proposed [66, 98, 99] and do provide improved and more uniform imaging across large fields of view, these designs are also more complicated than the classical single-reflector Cherenkov telescopes. Single-reflector designs are adequate for the fields of view necessary for CTA and provide a PSF well-matched to the proposed PMT-based camera. Imaging is improved by choosing relatively large f/d values, in the range of 1.2–1.5. A second variable is the dish shape: a Davies–Cotton layout provides good imaging over wide fields, but introduces a time dispersion. For small dish diameters this dispersion is smaller than the intrinsic width of the photon distribution, and therefore insignificant. For large dish diameters, the difference in photon path length from different parts of the reflector becomes larger than the intrinsic spread of photon arrival times, broadening the light pulse. A parabolic shape, which does not introduce this dispersion, is therefore preferred for very large telescopes. The transition between the two regimes is at about the size of the MST. Other alternative dish shapes face the same general trade-off between time dispersion and imaging quality. 9.1 Telescope mount and dish One of the most important mechanical components of a telescope is the mount, with its associated drive systems. This must allow the slewing of the dish and the tracking of celestial objects. The dish structure supports the segmented reflector and the camera support which holds the camera at the focus of the reflector. Critical properties for the structural components of a telescope include: Positioning of mirror facets. The dish structure supports mirror facets forming a parabolic or Davies–Cotton reflector. Its prime task is to keep the relative orientation of the mirror facets stable at the arcminute level. Mechanical stability of the optical system. Stability must be achieved under observing and "survival" conditions. Typical camera pixel sizes are 5–10′. To achieve a stable focus, independent of pointing and under modest wind loads and temperature variations, mirror facets have to be kept stable to well below 1′, either by a suitably stiff structure and/or by active mirror attitude control.
Survival conditions refer to high wind and snow loads, which the telescope must tolerate without suffering damage.

Pointing and tracking precision. The effective optical pointing of a telescope, i.e. the location of images on the camera, is determined by the precision of the tracking system, the overall deformations of the dish and the deformations of the camera support. Given the extremely short exposure times (ns), the pointing does not need to be stable or precise to better than a few arcminutes, provided that the effective pointing is monitored with sufficient precision.

Slewing speed. A slewing speed that allows repointing to any location in the sky within a minute is normally sufficient, given that objects are usually tracked over tens of minutes before repositioning. Only for one special class of targets, the GRB alert follow-ups, is the fastest possible slewing desirable. Faster slewing of 180° in 20 s is planned for the large-sized telescopes, which are most suited for such follow-ups, given their low energy threshold.

Efficiency of construction, transport, and installation. This is a key factor in reducing costs. For mass production of telescopes, it may be most efficient to set up a factory for assembly of structural components at the instrument site, avoiding shipment of large parts and minimising tooling.

Minimal maintenance requirements. Reducing on-site maintenance to a minimum aids high-efficiency operation and minimises the requirements for on-site technical staff.

Safety considerations. All procedures for installation and maintenance have to ensure a high level of safety for workers. The telescopes must also be constructed so that even in the case of failures of the drive systems or power they can be returned to their parking positions.

9.1.1 Mounting system and drives

While some of the very first Cherenkov telescopes were equipped with equatorial mounts, alt-azimuth mounts offer obvious advantages and have been adopted for all modern instruments. Two main types of mounts are in use (Fig. 27):

Circular rail system for azimuthal motion, supporting the dish between two elevation towers, as is used by H.E.S.S. and MAGIC. The elevation axis is positioned such that the dish is balanced and little or no counterweight is required. This support scheme will in general permit a large movement range in elevation, allowing the positioning of the camera near ground level for easy access, and the tracking of sources which go through the zenith without repositioning by 180° in azimuth. A disadvantage of a rail system is the considerable on-site effort required: a large ring foundation must be constructed, the azimuth rail needs to be carefully levelled, and drive systems have to be mounted and cabled on-site.

The central positioner, as used by VERITAS, in which the dish is supported from near its centre in the back. The central positioner construction is often used for radio and radar antennae and mirrors for solar power concentrators. The construction of the foundation is considerably simplified and the on-site installation work reduced, which can be of importance at sites with poor access or difficult terrain. In addition, maintenance tends to be simplified since all bearings and drive components are contained and protected within a compact positioner unit, as opposed to rails and wheels which are more exposed.
While these advantages make the choice obvious for antennae and solar concentrators, for which focal plane instrumentation is generally of low weight and f/d is normally very short, the trend for Cherenkov telescopes is now towards large f/d ratios, well above 1, to provide improved image quality. More and more components are also being installed in the camera, resulting in increased weight. Large counterweights are then required to balance the elevation axis in the central positioner design, as is visible in the VERITAS case. Without these counterweights, the elevation mechanism has to handle large torques and the desired positioning speeds require much larger drive power than needed for balanced systems. Access to the camera at ground level is also possible in these designs if one locates the elevation axis away from the centre of the tower.

Fig. 27 Examples of an alt-azimuth mount, as used for H.E.S.S. and MAGIC (left), and a central positioner design, used for the Whipple and VERITAS telescopes (right).

Alternative mounting schemes have been considered. For example, a hexapod mount was investigated for the H.E.S.S. II telescope, but was abandoned as the initially assumed cost advantages over conventional mounts turned out to be marginal due to the complexity of the hydraulic drive system and the extensive safety features required. In addition, a hexapod mount requires a mirror cover during daytime, when the dish is parked facing up. Camera access is also non-trivial. Another unconventional mounting scheme is a lift-up mirror carried on a circular rail, which eliminates the elevation towers and, at least in some dish support schemes, allows the reduction of bending torques on the dish due to the camera support system. A conceptual design for such a scheme was worked out for H.E.S.S. (see Fig. 28, left), but again did not offer cost advantages. With a different elevation mechanism, this support scheme has been considered for the medium-sized CTA telescope (Fig. 28, right). A drawback of such systems is that the centre of gravity moves as the telescope's elevation is changed, requiring significantly increased drive power compared to balanced systems, where the drives only have to counteract friction, inertia, and certain wind loads.

Fig. 28 Alternative alt-azimuth mounts, eliminating the elevation towers, as studied for H.E.S.S. (left) and CTA (right).

For the LST, only a rail design, as used by H.E.S.S. and MAGIC, appears feasible. This is also a possible solution for the MST, although here a central positioner is a viable option. The solution chosen for the mount has significant influence on the dish design. When a rail mount is used, the dish is supported either at its circumference, requiring a stiff dish envelope, or via an extra elevation cradle as used in the H.E.S.S. II telescope. With a central positioner, the dish is supported from its centre, and loads at the periphery of the dish must be minimised. For the SST, with its reduced weight and loads, it appears cost effective to use a central positioner type mount as illustrated in Fig. 29 (left), or to support the telescope by elevation towers but replace the rail by a central azimuth bearing as is used in the HEGRA telescopes (Fig. 29, right).

Fig. 29 Two options for the SST mount: a central positioner (left) or a HEGRA-type support (right).

Various types of drive systems are implemented in current telescopes. The experience gained with these will inform the CTA designs.
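As a rough feel for what the repointing targets quoted above (anywhere in the sky within a minute, 180° in 20 s for GRB follow-ups with the LST) and the encoder precision of ≤10′′ discussed below imply, the sketch that follows converts them into angular speeds and resolutions. It is a back-of-the-envelope illustration only; the 2′ pointing budget used for comparison is an assumed example value for "a few arcminutes", not a CTA specification.

```python
# Back-of-the-envelope mount/drive arithmetic for an alt-azimuth Cherenkov telescope,
# using the repointing and encoder figures quoted in this section. Illustrative only.

SIDEREAL_RATE_DEG_S = 360.0 / 86164.0      # ~0.0042 deg/s (~15 arcsec/s) tracking rate

def slew_speed_deg_s(angle_deg, time_s):
    """Average angular speed needed to repoint by angle_deg within time_s."""
    return angle_deg / time_s

standard_slew = slew_speed_deg_s(180.0, 60.0)   # repoint anywhere in the sky within a minute
grb_slew = slew_speed_deg_s(180.0, 20.0)        # LST goal for GRB alert follow-ups

encoder_res_arcsec = 10.0                       # "<= 10 arcsec" encoder precision
pointing_budget_arcmin = 2.0                    # assumed example for "a few arcminutes"

print(f"sidereal tracking rate : {SIDEREAL_RATE_DEG_S * 3600:.1f} arcsec/s")
print(f"standard slewing speed : {standard_slew:.1f} deg/s")
print(f"GRB slewing speed (LST): {grb_slew:.1f} deg/s")
print(f"encoder resolution     : {encoder_res_arcsec:.0f} arcsec "
      f"= {encoder_res_arcsec / (pointing_budget_arcmin * 60):.1%} of a {pointing_budget_arcmin:.0f}' budget")
```

The point of the comparison is simply that the encoder resolution sits an order of magnitude below the effective pointing requirement, so residual pointing errors are dominated by structural deformations rather than by the encoders, as noted in the text that follows.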
Some central positioners can be purchased as commercial units and others are under development with industrial partners. The main challenge is the large torque that must be transmitted by a rather compact unit, resulting in high forces on gears and bearings. Dual counter-acting drive units are unavoidable to compensate for play. For rail-based mounts, azimuth drive systems are used, e.g. friction drives (H.E.S.S. I), multiple driven wheels (H.E.S.S. II) and rack-and-pinion drives, implemented using a chain (MAGIC). For the elevation drive of the LST, a rack-and-pinion system is being considered, again with the option of using a chain. For the SST and the MST, directly driven elevation axes are an option. Commercial servo systems will be used to control the drive motors, with multiple feedback loops: H.E.S.S. II, for example, uses an inner feedback loop to control motor speed and/or torque, implemented in the servo controller, an intermediate fast software-based feedback loop implemented in a local controller to control axis motion and to balance multiple drive motors acting on an axis, and an outer, slower software-based feedback loop for absolute positioning and tracking, based on absolute shaft encoders. Relatively low-cost encoders provide a precision of ≤10′′. At this level, pointing precision is usually dominated by deformations of the dish and of the camera support, causing deviations of the effective optical axis from the nominal pointing monitored by the encoders. Pointing can be corrected by a combination of lookup-based corrections of elastic deformations, star guider CCD cameras monitoring the actual orientation of the dish, and CCD cameras monitoring the position and orientation of the focal plane instrumentation relative to the dish axis. Using a combination of such measures allows an (off-line) pointing accuracy of about 10′′ to be achieved.

9.1.2 Dish structure and camera support

The dish structure of the LST that is currently planned has a space frame similar to that used, in different variants, in the H.E.S.S., MAGIC and VERITAS telescopes (Fig. 30). A design with only a minimal space frame is favoured for the dish of the MST. Another option is a relatively coarse space frame with an additional structure to provide mirror attachment points. Alternatively, one can use a highly resolved space frame, based e.g. on tetrahedron structures, where each mirror support point forms a node of the space frame (Fig. 31). The final choice will depend on structural stability, cost and efficiency of production. Stiffness requirements will depend on whether active mirror alignment is employed to partly compensate for dish deformations. This option is particularly interesting for the LST.

Fig. 30 Examples of space-frame construction: the H.E.S.S. steel space frame (left) and the MAGIC three-layer CFRP space frame (right).

Fig. 31 Sketch of the triangular space-frame top layer with hexagonal mirror elements (blue lines). The mirror support points (green circles) are fixed close to the space-frame corners.

The materials primarily used for the telescope structures are steel, aluminium and, more recently, carbon fibre reinforced plastic (CFRP). All have their advantages and drawbacks, particularly when building many telescopes at remote sites:

Steel is the most commonly used material for past constructions, such as H.E.S.S. and VERITAS. It is generally the cheapest material, but results in rather heavy constructions.
Nearly everywhere in the world, expertise in steel fabrication and construction can be found.

Aluminium is lighter than steel and has a higher specific Young's modulus, but it has the largest thermal expansion of all three materials considered here.

CFRP is the strongest of the three materials and has the lowest weight, but it is the most expensive. It undergoes very little thermal expansion and damps oscillations better than the other materials, but connecting different elements is more difficult. This drawback might be overcome by an appropriate design, for example by use of composite-composite instead of metal-composite connections. CFRP is used in the MAGIC telescopes, to minimise their weight and moment of inertia to allow the maximum possible slewing speed.

9.1.3 Current baseline designs

For the MST and LST, the mechanically most complicated and costly structures, as well as for the SST, the following designs have emerged as baseline options (with other options still being pursued in parallel):

The general belief within the consortium is that the MST will become the workhorse of the CTA observatory. This implies that quite a number of telescopes will be built. Simplicity, robustness, reliability and ease of maintenance are therefore particularly important features. This led to the decision to build an early prototype. MC studies suggest that an f/d of around 1.4 and a FoV of about 8° are required. Three groups within the consortium have developed their designs (Figs. 32 and 33).

Fig. 32 Left: putting the telescope into a pit reduces the height of the telescope. Right: a CFRP dish on a steel mount. In both cases the dish is held at the edge and the azimuthal movement is realised by rails.

Fig. 33 This design makes use of a positioner for the movement around the azimuthal and elevation axes.

The main idea in the first design was to have the elevation axis close to ground level. This solution saves on the construction of elevation towers, but at the expense of a pit into which the lower half of the dish disappears when the telescope is parked with the camera at ground level (Fig. 32, left). The same team is working on a design that decouples the dish movement from the camera elevation. The second design was based on a light and stiff dish, which consists solely of CFRP and is designed in a way that avoids CFRP joints to metal (Fig. 32, right). This design allows easy access to the camera and mirrors. For the elevation, two options were foreseen, a lift-up system and a more conventional swing-like mount. The third design started from a mirror layout and a structural analysis. Two design options were considered: one has similarity with the H.E.S.S. I telescopes, the other with VERITAS. The second option, with the central positioner, has been worked out in more detail as this design simplifies construction and reduces costs substantially (Fig. 33). A discussion between the three different design groups has started and has led so far to the use of the CFRP camera structure of the second design in the third design. All three designs are judged to be technically feasible, as a consequence of which cost will be the major criterion for the choice. After the decision on the design, a prototype will be constructed, probably next to an institute and not at the experimental site. The main aim of this prototype will be the optimisation and simplification of the instrument with respect to construction and maintenance.
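To make the MST numbers above more concrete, the sketch below turns the f/d of about 1.4 and the ∼8° field of view, together with a ∼12 m dish and the ∼0.18° MST pixel size quoted later in this section, into a focal length, plate scale, camera diameter and approximate pixel count. All inputs are representative values taken from the text; the pixel-count estimate is a deliberately crude area ratio.

```python
import math

# Rough plate-scale sketch for a single-mirror MST, using numbers quoted in the text:
# dish diameter ~12 m, f/d ~1.4, ~8 deg full field of view, ~0.18 deg pixels.
# Purely illustrative; not an official CTA parameter set.

d_dish = 12.0                 # m (upper end of the 10-12 m MST range)
f_over_d = 1.4
fov_full_deg = 8.0
pixel_deg = 0.18

f = f_over_d * d_dish                                   # focal length in m
plate_scale_mm_per_deg = f * 1000 * math.radians(1.0)   # mm in the focal plane per degree on the sky

camera_diameter_m = 2 * f * math.tan(math.radians(fov_full_deg / 2))
pixel_mm = pixel_deg * plate_scale_mm_per_deg
n_pixels = math.pi * (fov_full_deg / 2) ** 2 / pixel_deg ** 2   # crude area-ratio estimate

print(f"focal length    : {f:.1f} m")
print(f"plate scale     : {plate_scale_mm_per_deg:.0f} mm/deg")
print(f"camera diameter : {camera_diameter_m:.2f} m")
print(f"pixel size      : {pixel_mm:.0f} mm")        # ~50 mm, as quoted for the MST
print(f"pixel count     : ~{n_pixels:.0f}")          # ignores packing and dead space
```

The resulting ∼50 mm linear pixel size matches the figure quoted in the photon-detection section, and the ∼2.4 m camera diameter illustrates why the camera and its support structure dominate so much of the single-mirror SST and MST cost discussion.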
In parallel with the prototyping of the single-mirror MST, the design of a Schwarzschild–Couder telescope for AGIS has progressed (see Fig. 34) and work towards prototyping of components and ultimately a full MST-SC prototype is underway in the US.

Fig. 34 Model of an AGIS Schwarzschild–Couder telescope and its two-mirror aplanatic optical system (from [96]).

For the LST, the current baseline design consists of a parabolic dish of 23 m diameter with f/d = 1.2, constructed using a carbon-fibre structure (an enlarged derivative of the proven MAGIC design). The goal is to keep the total weight around 50 t (Fig. 35). The dish uses a 3- or 4-layer space frame, based on triangular elements, with hexagonal mirrors supported from some of the nodes of the space frame. The dish is supported by an alt-azimuth mount moving on 6 bogeys along a circular rail.

Fig. 35 Conceptual layout of the LST. The dish has a diameter of 23 m.

For the SST, the mechanical design is less complex and the timescales are therefore somewhat more relaxed. Several options are still under study. The large FoV that is essential for the SST results, for single-mirror designs, in a relatively large camera with high costs. In comparison, the structure of the small telescope is cheap. This large imbalance makes it sensible to investigate an SST with secondary optics, which can potentially reduce the camera cost significantly, at the price of a more expensive mechanical structure. Whether this results in an overall saving is currently being investigated. A possible design of a two-mirror system is shown in Fig. 36 (left). The design of a 6-m conventional telescope is pursued in parallel (Fig. 36, right). The costs of these two fundamentally different concepts are now being evaluated. The result will determine which SST design will be selected.

Fig. 36 Conceptual layouts of a small telescope. Left: two-mirror system; right: conventional one-mirror system. The dish is held at the edge and the azimuth movement is realised by a central bearing.

9.2 Telescope optics and mirror facets

9.2.1 Telescope optics

The reflector of each telescope images the Cherenkov light emitted by the air showers onto the pixels of the photon detection system. Apart from the total reflective area, which determines the amount of light that can be collected, the important parameters of the reflector system are:

The point spread function. The PSF quantifies how well the reflector concentrates light from a point source. The RMS width of the PSF should be less than half the pixel diameter (for a Gaussian PSF, this corresponds to 40% containment if centred on a pixel), or better than 1/3 of the pixel diameter for 68% containment.

The time dispersion. Different light paths through the telescope result in a dispersion in the arrival time of photons on the camera, which should not exceed the intrinsic width of about 3 ns of the Cherenkov light pulse from a gamma-ray shower.

The reflector is usually segmented into individual mirrors. For the optics layout, most current instruments use either a parabolic reflector, which minimises time dispersion, or a Davies–Cotton design [100], where mirror facets of focal length f (and hence radius of curvature 2f) are arranged on a sphere of radius f (see Fig. 37), and which provides improved off-axis imaging. At the large field angles required for imaging Cherenkov telescopes, single-mirror designs suffer from significant optical aberrations with a resulting increase in PSF.
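Before turning to dual-mirror optics, it is worth quantifying the parabolic versus Davies–Cotton trade-off described above. The sketch below evaluates the Davies–Cotton time-spread expressions quoted further down in this subsection (maximum spread d/(8F·c), RMS spread d/(16√3 F·c)) for SST-, MST- and LST-like dimensions; the dish diameters and F values are illustrative choices based on the ranges given in the text, not fixed design parameters.

```python
import math

# Davies-Cotton photon time dispersion versus dish size, using the expressions quoted
# below: maximum spread d/(8*F*c), RMS spread d/(16*sqrt(3)*F*c). Dish parameters are
# illustrative SST/MST/LST-like values, not official design figures.

C_M_PER_NS = 0.2998  # speed of light in m/ns

def dc_time_spread_ns(d_m, f_over_d):
    """Return (maximum, RMS) photon arrival-time spread in ns for a Davies-Cotton dish."""
    t_max = d_m / (8 * f_over_d * C_M_PER_NS)
    t_rms = d_m / (16 * math.sqrt(3) * f_over_d * C_M_PER_NS)
    return t_max, t_rms

for name, d, F in [("SST-like", 6.0, 1.5), ("MST-like", 12.0, 1.4), ("LST-like", 23.0, 1.2)]:
    t_max, t_rms = dc_time_spread_ns(d, F)
    print(f"{name}: d = {d:4.1f} m, F = {F}: max {t_max:.1f} ns, rms {t_rms:.1f} ns")

# The spread grows linearly with dish size: it is negligible for an SST, becomes comparable
# to the ~3 ns intrinsic Cherenkov pulse width around the MST scale, and clearly exceeds it
# for an LST-sized dish, which is why a parabolic (isochronous) shape is used for the LST.
```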
Dual-mirror designs can provide significantly improved imaging, at the expense of a more complex telescope design [66].

Fig. 37 Davies–Cotton mirror optics, with mirror facets of focal length f arranged on a sphere of radius f.

For a parabolic reflector of diameter d, focal length f and focal ratio F = f/d, the RMS width of the PSF can be approximated by [65]
$$ \sigma^2_\zeta = \frac{1}{512} \frac{\delta^2}{F^4} + \frac{1}{16} \frac{\delta^4}{F^2} \qquad {\rm and} \qquad \sigma^2_\eta = \frac{1}{1536} \frac{\delta^2}{F^4} $$
where δ is the field angle and σ_ζ and σ_η are the widths of the PSF in the radial and azimuthal directions, respectively. The spot size is always larger in the radial direction, mostly due to the non-Gaussian tails of the PSF. For a parabolic reflector, the two spot dimensions differ by a factor of more than 1.7, resulting in systematic distortions of Cherenkov images for off-axis sources. For a Davies–Cotton reflector with a planar focal surface, the corresponding expressions are [66]
$$ \begin{array}{rll} \sigma^2_\zeta &=& \frac{1}{1024} \frac{\delta^2}{F^4} \left(1 - \frac{1}{4 F^2}\right) + \frac{1}{256} \frac{\delta^4}{F^2} \left(4 + \frac{35}{6 F^2}\right) \qquad {\rm and} \\ \sigma^2_\eta &=& \frac{1}{1536} \frac{\delta^2}{F^4} \left(\frac{10}{9} + \frac{9}{32 F^6}\right). \end{array} $$
The difference between the radial and azimuthal spot sizes is less pronounced in this case, typically around 20%. The Davies–Cotton design results in a flat distribution of photon arrival times, with a maximum time difference of d/(8F·c), and an RMS time dispersion \(\sigma_t = d/(16\sqrt{3}F \cdot c) \approx 0.12\,d/F\) ns (with d in metres). Usually, the first term in the expansions for the PSF dominates, resulting in a roughly linear increase of the PSF with the field angle δ and a roughly inverse-quadratic dependence on F. For typical parameter values, σ_ζ is 20–30% smaller for the Davies–Cotton design than for a parabolic mirror, whereas σ_η values are similar.

The expressions given above assume perfect shapes of the mirror facets, and very small facets for the Davies–Cotton design. In real applications, individual mirror facets will have an intrinsic spot size, which to a first approximation must be added quadratically to the PSFs given above. Parabolic mirrors can be constructed using spherical facets with focal lengths that are adjusted in 2–3 steps, rather than varying continuously according to their radial position. The optimal radii r_1 and r_2 for aspherical mirrors at a distance R from the optical axis of a parabolic dish of focal length f are
$$ \frac{r_1}{2f} = \sqrt{1+\frac{R^2}{4f^2}} \approx 1 + \frac{R^2}{8f^2} {\rm \qquad and \qquad} \frac{r_2}{2f} = \sqrt{\left(1+\frac{R^2}{4f^2}\right)^3} \approx 1 + \frac{3R^2}{8f^2} \quad . $$
Use of spherical facets will cause a typical contribution to the spot size of order (d/8f)², equivalent to that caused by the typical spread of 1% in facet focal length. Effects on the PSF are hence modest. The same holds for the influence of the facet size in the Davies–Cotton layout, as long as the number of facets is still large. Figure 38 illustrates how the PSF varies across the field of view, for different values of f/d, based on a realistic Monte Carlo simulation, including the effects of the PSF of the individual mirrors, the alignment inaccuracy, and the use of spherical mirror facets for the parabolic reflector.

Fig. 38 PSF (RMS) as a function of field angle, for a parabolic dish of different f/d (left) and for a Davies–Cotton dish (right).
Full lines represent the radial component of the PSF, dashed lines the transverse component.

For the SST and MST, among single-reflector designs a Davies–Cotton geometry provides the best imaging over a large field of view. For the LST only a parabolic dish is possible, due to the large time dispersion a Davies–Cotton design would introduce. To achieve a PSF of 3′ over a 7° field of view, an F value of about 1.5 is required. Dual-mirror telescopes have so far not been used in Cherenkov astronomy, but obviously allow improved compensation of optical errors over a wide field of view. In [66] dual-reflector designs are discussed in depth, with particular emphasis on the Schwarzschild–Couder design, which combines a small plate scale (adapted to the use of multi-anode PMTs as photo-sensors) with a 3′ PSF across a 5° radius field of view (see Fig. 39). Compared to single-reflector designs, where the camera has to be supported at a large distance F·d from the dish, the dual-reflector design is quite compact. Drawbacks include the fact that non-spherical mirrors are needed, which are more difficult to fabricate, and that the tolerances on the relative alignment of optical elements are rather tight. Also, the large secondary reflector results in significant shadowing of the primary reflector. CTA's US collaborators, together with some European groups, plan to build a Schwarzschild–Couder telescope of 12 m diameter. While current CTA designs are based on single-reflector telescopes, a dual-reflector construction could be adopted in particular for the SST or the MST, should the developments prove promising.

Fig. 39 Dual-reflector optics design for Cherenkov telescopes, providing an improved PSF over a large field of view combined with a small plate scale [66].

To realise the PSFs given above, the orientation of the mirror facets obviously has to be stable to a fraction of the PSF under varying dish orientations, temperatures, temperature gradients, and wind loads. Due to the reflection, orientation errors enter with a factor of 2 into the PSF. The facet orientation can be stabilised either by using a rigid dish, or by active compensation of dish deformations. For example, the mechanical structure of the H.E.S.S. telescopes is designed to keep the facet orientation stable to within 0.14 mrad (0.5′) RMS over the elevation range 45–90° and the operational range for wind loads and temperatures [101]. In MAGIC, an active mirror alignment system compensates for dish deformations [102]. The initial alignment of the mirror facets, as well as the calibration of active systems, is usually carried out using images of bright stars and has been demonstrated to have a precision well below the typical 3′ PSF (e.g. [83]).

Of additional interest is the precision with which the real dish shape needs to approximate the ideal shape. Use of straight beam segments to approximate a curved dish may simplify production considerably. Two effects matter: an otherwise ideal facet displaced by δz along the optical axis will generate a spot of angular diameter \(\Delta \zeta = d_{\rm facet}\,\delta z / f^2\), where d_facet is the facet diameter. The corresponding RMS is σ_ζ = Δζ/4. Typically, facets have a PSF of 1 mrad diameter or better. Limiting additional contributions due to imperfect facet placement to 1 mrad, which implies that they matter only near the centre of the field of view, where the facet PSF dominates imaging, one finds \(\delta z < 10^{-3} f^2/d_{\rm facet}\), or about 0.2 m for f = 15 m and d_facet = 1 m.
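The expressions above are straightforward to evaluate numerically. As a check, the sketch below reproduces two of the worked numbers in the text: the radial RMS PSF at the edge of a 7° field of view for F = 1.5 (about 3′ for a Davies–Cotton dish) and the ∼0.2 m axial facet-placement tolerance for f = 15 m and 1 m facets. Angles are handled in radians internally; the parameter choices are simply those quoted in the surrounding text.

```python
import math

# Numerical check of the single-reflector PSF expressions and the facet-placement
# tolerance quoted above. delta is the field angle in radians; results are printed
# in arcminutes. Parameter choices (F = 1.5, 3.5 deg half-FoV, f = 15 m, 1 m facets)
# follow the examples given in the text.

RAD2ARCMIN = 60 * 180 / math.pi

def psf_parabolic(delta, F):
    """Radial RMS PSF width (radians) of a parabolic reflector."""
    s2 = delta**2 / (512 * F**4) + delta**4 / (16 * F**2)
    return math.sqrt(s2)

def psf_davies_cotton(delta, F):
    """Radial RMS PSF width (radians) of a Davies-Cotton reflector (planar focal surface)."""
    s2 = (delta**2 / (1024 * F**4)) * (1 - 1 / (4 * F**2)) \
       + (delta**4 / (256 * F**2)) * (4 + 35 / (6 * F**2))
    return math.sqrt(s2)

F = 1.5
delta = math.radians(3.5)      # edge of a 7 deg field of view
print(f"parabolic     : {psf_parabolic(delta, F) * RAD2ARCMIN:.1f} arcmin")
print(f"Davies-Cotton : {psf_davies_cotton(delta, F) * RAD2ARCMIN:.1f} arcmin  (~3', as stated)")

# Facet placement along the optical axis: delta_zeta = d_facet * dz / f^2 < 1 mrad
f_dish, d_facet = 15.0, 1.0    # m
dz_max = 1e-3 * f_dish**2 / d_facet
print(f"axial facet placement tolerance: {dz_max:.2f} m")   # ~0.2 m, consistent with the text
```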
If the focal distance is wrong for a given facet, the spot location for off-axis rays will also be shifted, by Δζ = δΔz/F, which should again be small compared to the spot size, typically requiring Δz < 0.1 m. Another limit comes from the time dispersion introduced by this deviation, which is ΔT = 2Δz/c, implying that Δz should not exceed 0.1–0.2 m. In summary, mirror placement along the optical axis should be within 10 cm of the nominal position for the MST.

9.2.2 Mirror facets

Because of its large size, the reflector of a Cherenkov telescope is composed of many individual mirror facets. It is therefore important to balance the ease and cost of production techniques against the required optical precision. In total, CTA will need of the order of 10⁴ m² of mirror area, an order of magnitude more than current instruments. As the telescopes are required to observe the Cherenkov light emitted from the many particle tracks of an extensive air shower, the necessary optical precision of the mirror system is relatively relaxed. Focusing can be worse than is required for mirrors for optical astronomy by about two orders of magnitude, and the distance of the mirror facets to the focal plane needs to be correct only to within a few cm, as opposed to the sub-wavelength precision needed in optical astronomy.

The mirror facets for CTA will probably have a hexagonal shape and dimensions of 1–2.5 m². Large mirror facets have the advantage that they reduce the number of facets on a dish and the number of support points and alignment elements required. On the other hand, in particular for Davies–Cotton optics, the optical performance worsens as mirror facets become larger. Also, the choice of manufacturing technologies becomes rather limited. For these reasons, the current baseline for the MST is to use hexagonal mirrors of 1.2 m (flat-to-flat) diameter. Performance criteria for facets are equivalent to those for current instruments as regards the spot size, the reflectance and requirements on long-term durability. The reflected light should largely be contained in a 1 mrad diameter area, the reflectance in the 300–600 nm range must exceed 80%, and facets must be robust against ageing when exposed to the environment at the chosen site for several years. Spherical facets are in most cases a sufficiently accurate approximation. For a parabolic dish, a variation of facet focal length with distance from the dish centre may be considered, although gains are modest for a dish with relatively large f/d.

Several technologies for the production of mirror facets for Cherenkov experiments were used in the past, or are under development at present. These can be divided into two classes: technologies using grinding/polishing or milling of individual mirrors, as used for most current instruments; and replication techniques, where mirrors are manufactured using a mould or template, which has obvious advantages for mass production. Facet types produced using grinding or milling techniques include:

Glass mirrors, which have been the standard solution for many past and present Cherenkov telescopes (e.g. HEGRA, CAT, H.E.S.S., VERITAS). The mirrors were produced from machined and polished glass blanks that were front-coated in vacuum with aluminium and some weather-resistant transparent protection layer, such as vacuum-deposited SiO₂ (HEGRA, CAT, H.E.S.S.), or alternatively Al₂O₃ applied by anodisation (VERITAS). These mirrors exhibit high reflectivity and good PSFs and there is extensive production experience.
Drawbacks are their fragility and weight, in particular if facets of ≥1 m² are considered. Their front-side coating shows relatively fast ageing and degradation when exposed permanently to wind and weather. A typical degradation of the reflectance of around 5% per year is observed for a single ∼100 nm SiO₂ protection layer. Production and handling of thin (few cm) and large (1 m²) facets is non-trivial.

Diamond-milled aluminium mirrors are used in the MAGIC telescopes [103]; these light-weight mirrors are composed of a sandwich of two thin aluminium layers, separated by an aluminium hexcell honeycomb structure that ensures rigidity, high temperature conductivity and low weight (see Fig. 40). After a rough pre-milling that ensures approximately the right curvature of the aluminium surface, the mirror is precisely machined using diamond-milling techniques. A thin layer of quartz of ∼100 nm thickness, with some carbon admixture, is plasma-coated on the mirror surface for protection against corrosion. Diamond-milled mirrors have proven more resistant to ageing effects (reflectance loss of 1–2% per year) than mirrors with a thin reflective coating on glass or other substrates, presumably since the reflective layer cannot be locally destroyed. On the other hand, the initial reflectance of diamond-milled mirrors is a few percent lower.

Fig. 40 Various mirror types under consideration for CTA. Top: diamond-milled aluminium honeycomb mirrors. Middle left: cold-slumped glass-foam sandwich mirrors. Middle right: open fibre-reinforced plastics mirror (carbon fibre or glass fibre). Lower left: carbon-fibre composite mirror with CFRP honeycomb. Lower right: carbon-fibre composite mirror produced with SMC technology.

An ongoing development is the mass production of mirror panels by means of replication technologies. These are cost effective and can be used to produce non-spherical and very light-weight mirrors with good and reproducible optical quality. Replication methods look to be promising for the large-scale production of CTA mirror facets and will be considered as the baseline design, although long-term tests are still required. Replica production methods include:

Cold-slumped glass mirrors. The mirror panels are composed of two thin glass sheets (1–2 mm) glued to a suitable core material, giving a structure with the necessary rigidity. Construction proceeds as follows: at room temperature, the front glass sheet is formed to the required optical shape on a master by means of vacuum suction. The core material and the second glass sheet are glued to it. After the curing of the glue, the panel is released from the master, sealed and coated in the same way as a glass mirror. Half of the mirrors of the MAGIC II telescope were produced with this technology using an aluminium honeycomb Hexcell structure as core material [104]. For CTA, other core materials are under investigation, such as various foams. Especially promising is an all-glass closed-cell foam that can be pre-machined to the required curvature (see Fig. 40). Further investigation of the effects of thermal insulation between the front and the back of the mirror caused by foams is required.

Aluminium foil mirrors. Aluminium honeycomb sandwich mirrors with reflective aluminium sheets of 1–2 mm thickness (made e.g. by the company Alanod) are also being studied in detail. Their main limitation currently results from the imperfect reflection properties of the aluminium foil.

Fibre-reinforced plastics mirrors.
Several attempts are being made to use carbon- or glass-fibre reinforced plastic materials to produce light-weight mirror facets. Three different technologies are currently under development for CTA: (a) an open sandwich structure of glass-fibre or carbon-fibre reinforced plastic, consisting of two flat plates and spacers, with either an epoxy layer cast on one plate or a bent thin glass sheet glued to it to form the mirror surface; (b) a closed structure of two carbon-fibre reinforced plastic plates bent to the required radius of curvature, an intermediate pre-machined CFRP honeycomb for stability and a thin glass sheet as reflecting surface; and (c) a one-piece design using a compound containing carbon fibre and the high-temperature and high-pressure sheet moulding (SMC) technology, which is frequently used in the automotive industry. To form a smooth surface in the same production step, an in-mould coating technology is under investigation, which would allow production times of the order of just a few minutes per substrate. See Fig. 40 for the different mirror types.

Since the mirrors are permanently exposed to the environment, degradation of mirror reflectivity is a serious concern. In the case of aluminium-coated mirrors, water can creep along the interface of the glass and aluminium layer, because the aluminium does not stick perfectly to the glass surface and the protective layers often have pin holes. In contrast, solid aluminium mirrors show localised corrosion, which, even when deep, affects only a very small fraction of the surface. Possible cures for the glass mirrors could be intermediate layers improving adhesion, e.g. of chromium or SiO, or more resistant protection layers, for example multiple layers which reduce the probability of pin holes in the coating. Multi-layer protective coatings could also be used to enhance reflectivity in the relevant wavelength region. These are under investigation, as are purely dielectric coatings without any aluminium, which consist of multiple layers with different refractive indices. The latter can in principle provide reflectances of up to 98% and would not suffer from the rather weak adhesion of aluminium to glass. Another option to improve the mirror lifetime is to apply the reflective coating to the protected back side of a thin glass sheet, which could then be used in the replication techniques described above. Disadvantages are transmission losses in the glass, the requirement of a very uniform glass thickness (as the mould defines the shape of the front side but the reflective layer is on the back), and, in addition, icing problems due to radiation cooling of the front surface.

In summary, many different technologies for the production of mirror facets are under investigation. For several of them, large-scale production experience exists already; others are in a development phase. A challenge in mirror production will be to find the optimum compromise between mirror lifetime and production costs. Current production costs are 1,650 €/m² for the 0.7 m² H.E.S.S. II glass mirrors, 2,450 €/m² for the 1.0 m² MAGIC II milled-aluminium mirrors, and 2,000 €/m² for the 1.0 m² MAGIC II cold-slumped glass mirrors. The much larger production scale of CTA and the use of optimised techniques are expected to result in a significant reduction in cost, in particular for the replication technologies. Current baseline specifications for MST mirror facets are summarised in Table 2.
Table 2 Baseline specifications for mirror facets (MST): size 1,200 mm flat-to-flat; shape spherical (aspherical optional); focal length ∼16 m; reflectance >80% between 300 and 600 nm; spot size <1 mrad diameter (80% containment).

9.2.3 Mirror support and alignment

To achieve design performance, mirror facets need to be aligned with a precision which is about an order of magnitude better than the optical point spread function, i.e. given the PSF requirement of <1 mrad, the alignment precision needs to be well below 0.1 mrad, or 100 μm assuming a typical 1 m lever arm between mirror support points. Various alignment methods are in use for existing telescopes:

Manual alignment. Using an appropriate adjustment mechanism, mirror facets are manually aligned after mounting. For technical reasons, alignment is usually performed at or near the stow position of the telescope. Deformations in dish shape between the stow position and the average observation position (at 60–70° elevation) can be compensated by "misaligning" mirrors by the appropriate amount in the stow position. This scheme is used in the VERITAS telescopes [105].

Actuator-based alignment. Initial alignment of mirrors is carried out by remote-controlled actuators, using the image of a star viewed on the camera lid by a CCD camera on the dish, and implementing a feedback loop which moves all facet spots to a common location. This scheme is employed by H.E.S.S. [83].

Active alignment. Remote-controlled actuators are used not only for initial alignment of facets, but also to compensate for deformations of the dish, in particular as a function of elevation. If dish deformations are elastic, reproducible and not very large compared to the point spread function, alignment corrections can be based on a lookup table of actuator positions as a function of telescope pointing. If deformations are large or inelastic, a closed feedback loop can be implemented by actively monitoring facet pointing, using lasers attached to each facet and imaged onto a target in the focal plane. Active alignment is used by MAGIC [102].

Technically, the requirements for actuator-based alignment and active alignment are very similar, the main difference being that for active alignment a significant fraction of the facets needs to be moved simultaneously or nearly simultaneously as the telescope pointing changes, requiring parallel rather than serial control of actuators and a higher-capacity power supply. Since manual alignment of the large number of CTA mirror facets is impractical, certainly the medium-sized and large telescopes will be equipped with actuators. The small and medium-sized telescopes will have mechanically stable dish structures which do not necessarily require active control, but active (lookup-table driven) mirror control could be implemented to maintain an optimum point spread function over the entire elevation range.

Desirable features for actuators include a movement range of at least 30 mm and a built-in relative or, better, absolute position encoder which allows the actuator to be moved by an exact pre-defined amount. This is particularly relevant for lookup-based corrections. For active alignment, the positioning speed needs to be such that the changes in mirror alignment are performed within the time needed to move the telescope to a different position. For actuator-based alignment it is sufficient to be able to perform an initial alignment within a few days and possible re-alignments within a few hours. When not moving, actuators should be self-blocking to avoid movements, e.g.
in the case of power failure. The actuators need to perform reliably and without significant maintenance over the expected lifetime of the CTA array of over 20 years (the mean time between failures (MTBF) should be 100 years). Figure 41 shows a prototype actuator design based on a spindle driven by a stepper motor, with a combination of a digital Gray-code rotation encoder and analogue signals from four Hall probes providing absolute position sensing. The actuator is controlled by wireless communication using the Zigbee industry standard, with each actuator identified by a unique (48 bit) code. A broadcast mode is also available, which could be used to communicate the current elevation to all actuators, allowing the controller to look up and apply the relevant individual correction values.

Fig. 41 Prototype mirror actuator based on a stepper-motor driven spindle, providing absolute position encoding and a wireless control interface.

A second solution can be seen in Fig. 42. The upper part of the figure shows the motor, two actuators, and the micro-controller board of one mirror unit. This device uses servo motors with a Hall sensor attached to the motor axis, which makes possible relative positioning of the actuator with high accuracy. The communication is based on CAN (Controller Area Network), a multi-master broadcast serial bus standard which is used in the automotive industry and other areas where there is demand for high reliability. The communication of the telescope units with the control computer is done via Ethernet. The electronics layout is depicted in the lower part of the figure.

Fig. 42 Upper part: prototype mirror control actuators, motor, and micro-controller board for the solution based on relative position encoding. Lower part: electronics layout of the setup.

Mirror facets will be attached at three points, two equipped with actuators and one with a universal joint. The facet mounting scheme should allow the installation of the facets from the front, without requiring access from the space-frame side of the dish. This can be achieved by supporting mirrors at the outer circumference, where attachment points are easily accessible, or by using screws or attachment bolts going through the mirror. Current baseline specifications for the mirror alignment system are summarised in Table 3.

Table 3 Baseline specifications for mirror alignment actuators: alignment precision <0.1 mrad; initial alignment within a few days; re-alignment (actuator-based alignment) within a few hours; re-alignment (active alignment) within the slewing time to a new position.

9.3 Photon detection, electronics, triggering and camera integration

The cameras developed for gamma-ray detection with current atmospheric Cherenkov telescopes have reached the sensitivity required to perform detailed investigations of many astrophysical sources. Further advancing Cherenkov telescope performance requires, in particular, that the energy range covered be extended, i.e. that the gamma-ray energy threshold be reduced and detection capabilities be extended at high energies, enhancing the flux sensitivity, and improving angular resolution and particle identification. Lowering the threshold energy and increasing the sensitivity of an IACT requires that more Cherenkov photons be collected and/or that these are detected more efficiently.
The efficiency of the collection of Cherenkov photons and their conversion to photoelectrons in the photo-sensor must therefore be improved: the non-sensitive regions (dead areas) in the camera must be minimised, for example by using light guides, and the effective photon conversion efficiency increased by exploiting novel technical developments. Enlarging the energy range requires appropriate electronics with a sufficiently large dynamic range. Achieving the required performance necessitates the development and production of electronics components dedicated to CTA. Sophisticated application-specific integrated circuits (ASICs) for equipping the front-end part of the readout chain are under study. These have the advantage that they minimise signal distortion, decrease the power consumption and ultimately reduce the cost of the experiment considerably. Integrated readout systems take advantage of the recent development of analogue memories for data buffering. An alternative solution is a fully digital readout scheme, in which the amplified signal from the photon sensors is directly digitised by an analogue-to-digital converter (ADC) and buffered in a deep memory. Readout and triggering benefit from continuous data storage to avoid deadtime.

The integration of detectors and the associated electronics reduces the size of the apparatus, and embedded cameras have operational advantages, particularly at an isolated site and given the number of about one hundred cameras that will be required for CTA. The use of complete spare cameras, rather than spare components for these cameras, can significantly simplify the maintenance of the system. The camera typically consists of a cylindrical structure built completely from low-mass components. It holds a matrix of photon sensor cells, carefully optimised to make maximal use of the incoming light, and is fixed to the arms of the telescope in the focal plane, above the dish of the telescope. For embedded cameras, the light sensor, readout and trigger system, data acquisition system and power supply are integrated in a modular mechanical structure. The only connections that enter the camera are the input power, the communication network, and any central trigger cables. Disadvantages are a heavy camera requiring considerable cooling power and a heavy camera support structure.

9.3.1 Photon detection

The photon sensors most commonly used in IACTs are photomultipliers with alkali photo-cathodes and electron multipliers based on a chain of dynodes. The technology is well established, but is subject to continuous development and improvement. PMTs have established themselves as the best available low-light-level sensors for ultra-fast processes. The relatively high peak quantum efficiency (QE) currently available (up to 30%), together with high gains of up to 10⁶ and low noise, allows the reliable measurement even of single photoelectrons. A dynamic range of about 5,000 photoelectrons is obtainable with PMTs. The PMTs convert impinging photons into a charge pulse of a size measured in number of photoelectrons. IACTs usually use PMTs with bialkali-type photo-cathodes, as these provide the highest QE. They are sensitive in the wavelength range of 300–600 nm (200–600 nm if a PMT with a quartz window is used). The bialkali PMT sensitivity curve is well matched to the spectrum of Cherenkov light arriving at ground level from air showers. As a rule, one needs to amplify this pulse in order to match the sensitivity of the data acquisition (DAQ) electronics.
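As a rough illustration of where the collected photons go, the sketch below multiplies representative throughput factors quoted in this section (mirror reflectance above 80%, light-funnel transmission of about 80%, and a peak quantum efficiency of up to 30%) into an overall photon-to-photoelectron efficiency, and converts single photoelectrons into anode charge for the gains mentioned in the text. The 90% photoelectron collection factor is an illustrative assumption, not a number taken from the text.

```python
# Back-of-the-envelope photon budget, combining representative numbers quoted in this
# section: mirror reflectance ~0.8, light-funnel transmission ~0.8, peak PMT quantum
# efficiency ~0.3. The dynode collection efficiency of 0.9 is an illustrative assumption.
E_CHARGE = 1.602e-19  # C

mirror_reflectance = 0.80
funnel_transmission = 0.80
quantum_efficiency = 0.30
pe_collection = 0.90          # assumed, not from the text

overall = mirror_reflectance * funnel_transmission * quantum_efficiency * pe_collection
print(f"overall photon detection efficiency ~ {overall:.2f}")   # roughly 0.17

# Charge delivered at the anode per photoelectron, for two gains quoted in the text
# (30,000-50,000 for long-lifetime operation; up to 10^6 for conventional operation).
for gain in (5e4, 1e6):
    q_pC = gain * E_CHARGE * 1e12
    print(f"gain {gain:8.0f}: {q_pC:.3f} pC per photoelectron")
```

The product makes the text's point concrete: each individual factor is already fairly high, so further gains in threshold and sensitivity have to come from incremental improvements in every element of the chain, including the quantum efficiency of the sensor itself.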
However, new photon detectors are under study and the CTA cameras must be designed to allow their integration if their performance and cost provide significant advantages over PMTs.

Criteria for photo-detectors

Spectral sensitivity. The spectrum of Cherenkov light is cut off below 300 nm, due to atmospheric transmission effects, and falls off as 1/λ² towards longer wavelengths. Candidate photo-detectors should be matched to the peak in this spectrum at around 350 nm. At large wavelengths, beyond about 550 nm, the signal-to-noise ratio becomes increasingly unfavourable due to the increasing intensity of the night-sky background in this region. Above ∼650 nm strong emission lines are present in the night-sky spectrum, originating from the rotational levels of (OH) groups. It is therefore desirable but not essential to measure up to wavelengths of about 600–650 nm. (The more accurately the absolute charge in an image is measured, the better the absolute calibration.)

Sensor area. Currently favoured pixel sizes are around 0.1° for the LST, 0.18° for the MST, and 0.25° for the SST. For conventional telescope designs (single-mirror optics, with Davies–Cotton or parabolic reflectors), these angular sizes translate to linear dimensions of 40, 50 and 35 mm, respectively. If a secondary-optics design is used for the SST, a size of 0.2° represents around 6 mm. For the secondary-optics design for the MST, a smaller angular pixel size of 0.07° equates to the same physical size of 6 mm. Light-collecting Winston cones in front of any sensor reduce the required sensor size by a factor of 3–4 compared to the pixel size and can decrease the amount of dead space between pixels.

Sensor uniformity. Sensor non-uniformities below ∼10% are tolerable. Larger non-uniformities should be avoided as they introduce an additional variable component in the light collection and thus increase the variance of the output signal.

Dynamic range and linearity. Sensors should be able to detect single photons and provide a dynamic range of up to 5,000 photo-electrons, with linearity deviations below a few per cent. Non-linearities can be tolerated if they can be accurately corrected for in the calibration procedure.

Temporal response. The time dispersion of Cherenkov photons across a camera image depends on the energy of the primary gamma ray. At low energies, the dispersion is only a few nanoseconds. Matched short signal-integration windows are used to minimise the noise. The photo-sensor must not significantly lengthen the time structure of a Cherenkov light pulse. It is desirable to determine the pulse arrival times with sub-nanosecond precision for sufficiently large light pulses.

Lifetime. Sensors will detect photons from the night-sky background at a typical rate of about 100–200 MHz for the telescopes with large collection areas (MST and LST). If operation is attempted when the moon is up, this rate can increase by an order of magnitude. Sensors should have a lifetime of 10 years for an annual exposure of up to ∼2,000 h. This can be achieved using PMTs with only 6–8 dynodes, operated at a gain of 30,000–50,000, followed by a fast AC-coupled preamplifier.

Rate of spurious signals. Spurious signals from photo-detectors can result in an increase of trigger rates and a degradation of trigger thresholds. This is a particular issue for photomultiplier sensors, where residual gas atoms in the tubes are ionised by impinging electrons.
The resulting afterpulses, produced by positively charged heavy ions bombarding the photo-cathode, may have large amplitudes and long delays relative to the primary electron. Photomultipliers should be selected with an afterpulse probability below ∼10⁻⁴–10⁻⁵.

Operational characteristics. To ensure efficient and reliable operation of the systems, sensors should show good short- and medium-term stability, and only gradual ageing, if any. Sensors should be able to survive high illumination levels.

Cross-talk. Although cross-talk for photomultipliers is very low, it may be an issue for alternative sensor solutions such as silicon photomultipliers or multi-anode photomultipliers. Cross-talk between adjacent pixels must be kept ≤1%.

Cost and manufacturing considerations. In total, the CTA consortium is intending to use ∼10⁵ sensor channels. Thus, the photo-detectors comprise a major fraction of the total capital cost of the project and any innovations which allow their cost to be reduced should be carefully considered. One important criterion is that the manufacturer/supplier must be able to provide the necessary number of sensors to the required specification with an acceptable and reliably known lead time.

Fig. 43 Spectral response of several types of super-bialkali PMTs from Hamamatsu (green, red and black) and Electron Tubes Enterprises (yellow and blue), compared to the spectrum of Cherenkov light produced by vertical 100 GeV gamma rays on the ground (grey, dashed), convoluted with the standard atmospheric transmission for an observation height of 2,200 m a.s.l. The numbers in the inset give the convolution of the QE curve of a given PMT with the dashed line.

Candidate photo-detectors

The baseline photo-detector for CTA is the PMT. However, there may be alternative solutions that reach maturity on approximately the right timescale for CTA construction. Modular cameras for the LST, MST and SST are therefore desirable to allow the exchange of photo-detectors without major alterations to the trigger and readout electronics chain. In the case of a secondary-optics design of the MST or SST, conventional PMTs are not available in the appropriate physical size, and therefore the choice of a secondary-optics telescope design would depend heavily on the availability of alternative photo-detectors, such as those presented here.

Baseline solution—photomultipliers. The spectral sensitivity of conventional PMTs (see Fig. 43), with their falling sensitivity at large wavelengths, provides a reasonably good match to the spectrum of Cherenkov light on the ground. The baseline solution for CTA is to use PMTs with enhanced quantum efficiency compared to those currently used in H.E.S.S., for example. Such tubes are becoming commercially available and offer a ∼50% advantage in photon detection efficiency over conventional PMTs.

Silicon photomultipliers (SiPMs), known also as MPPCs, GAPDs and micro-channel APDs, are novel light sensors that are rapidly reaching maturity. The more recent SiPMs consist of single pixels which contain several hundred to several thousand cells, coupled to a single output. Each cell is operated in Geiger mode. An arriving photon can trigger the cell, after which that cell suffers significant deadtime, but leaves the surrounding cells ready to collect other arriving photons. The photon-counting dynamic range is comparable to the number of cells. Silicon photo-sensors could provide higher photon detection efficiencies than the latest PMTs at lower cost and without the requirement for high voltage.
However, silicon sensors typically require cooling to reduce the dark count to a manageable level, suffer from optical cross-talk, and are not as well matched to the Cherenkov light spectrum as PMTs. They therefore require further improvement and commercialisation. Depending on the time scale and cost of such a development, SiPMs could be considered as a candidate sensor for replacing the PMTs or, alternatively, as an upgrade path for all telescope sizes. They are of particular interest for the SST secondary-optics option, where their physical size is better suited to the plate scale of the telescope.

Multi-anode photomultipliers. MAPMTs provide multiple pixels in a compact package, with properties similar to monolithic PMTs. Such devices offer individual pixel sizes of the order of 6 mm, suitable for secondary-optics schemes. Enhanced quantum efficiency versions with up to 64 channels are now available. The suitability of MAPMTs must be assessed, and properties such as the uniformity, cross-talk, dynamic range and detection efficiency are currently under investigation.

Associated systems

Light-collecting Winston cones. Winston cones placed in front of any sensor could reduce the required sensor size by a factor of 3–4 (see Fig. 44). However, compression is limited by Liouville's theorem, which states that the phase-space volume of an ensemble of photons is conserved. Lightcones can minimise the dead space between pixels and reduce the amount of stray light from the night sky impinging on the sensors at large incidence angles. Figure 44 illustrates the typical angular response of a light funnel. Current lightcones have a net transmission of about 80%. Improved cones may allow increased performance at modest cost.

Fig. 44 Left: PMT pixel cluster with light funnels. Right: angular response of a typical light funnel, normalised to the on-axis response.

Plexiglas input window. To avoid the deposition of dust on the photo-detectors and lightcones (if used), a Plexiglas window could be utilised to seal the camera. The transmission losses of ∼8% for a 3 mm thick GS 2458 window may be considered well justified because of the absence of deterioration of the light throughput on long time scales. The use of a sealed camera and Plexiglas window must be investigated for each telescope size. For the LSTs and MSTs, sealing the camera does not significantly increase its total cost. For the SST, a sealed system may represent a significant proportion of the cost of the camera, but give advantages in maintenance and long-term performance.

PMTs and MAPMTs need to be provided with a stable and adjustable high-voltage supply. The first dynodes are often supplied through a passive divider chain, the last ones using an active divider to provide more power, improving the dynamic range and allowing stabilisation. The HV system also needs to provide a current limiter or over-current trip circuit for protection in case of excessive illumination of the PMTs, due to bright stars, moonlight or, even worse, daylight. Several options are under study for CTA: (a) Cockcroft–Walton type, (b) transistor-based active divider type and (c) one central power supply providing individually attenuated voltages to different channels.

Signal recording electronics

Air-shower induced photo-sensor signals have a pulse width of a few ns, superimposed on a random night-sky background with typical rates of some 10 MHz to more than 100 MHz, depending on mirror size and pixel size (which is therefore different for the LST, MST and SST).
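A quick way to see why short integration gates matter is to count the night-sky-background photoelectrons expected within the gate, using the per-pixel rates quoted above (∼100–200 MHz for the large telescopes under dark skies, roughly ten times higher with the moon up). The specific rates and gate lengths in the sketch below are illustrative.

```python
# Night-sky-background (NSB) pile-up in the signal integration window, using per-pixel
# NSB rates of the order quoted above (~100-200 MHz dark sky for MST/LST, roughly 10x
# higher under moonlight). The gate lengths are illustrative choices.
nsb_rates_mhz = {"dark sky (LST/MST)": 150, "moonlight (~10x)": 1500}
gates_ns = [10, 30, 100]

for label, rate_mhz in nsb_rates_mhz.items():
    for gate in gates_ns:
        mean_pe = rate_mhz * 1e6 * gate * 1e-9     # expected NSB photoelectrons in the gate
        print(f"{label:20s} gate {gate:4d} ns -> ~{mean_pe:5.1f} NSB p.e.")
```

Shrinking the gate from 100 ns to 10 ns reduces the expected NSB charge under the signal by an order of magnitude, which is exactly the motivation given in the next paragraph for high-bandwidth recording and matched integration windows.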
Optimum capture of air-shower signals implies high bandwidth and short integration times. Ideally, the dynamic range and noise should be such that single-photoelectron signals are resolved, and signals of a few thousand photoelectrons are captured without truncation. The recording electronics must delay or store the signals whilst a trigger is generated, indicating that the event is to be captured and read out. The generation of a trigger signal could take from 0.1 to a few μs within a single telescope, depending on the complexity of the trigger scheme, and ≥10 μs if trigger signals between several telescopes are combined. Advances in signal recording and processing provide the possibility of recording a range of signal parameters, from the integrated charge to the full pulse shape over a fixed time window. Whilst it is not yet clear that the full pulse shape is needed, it is desirable to record at least a few parameters of the pulse shape rather than just the integrated charge. In this way, absolute timing information would be available, allowing improved background rejection and adaptive integration windows.

Increasing the bandwidth of the signal recording system will allow improved timing and shorter integration gates, resulting in reduced levels of night-sky background under the signal. However, as the bandwidth of the system is increased, so is the cost. Whilst such an approach may be justified for the LST, where the night-sky background is high, the Cherenkov pulses are very fast and the number of telescopes is low, this is not necessarily the case for the SST, where the night-sky background is low, the Cherenkov pulses are not as fast and any cost savings could be used to build more telescopes. The bandwidth of the electronics chain for a given telescope size should be motivated by examining its consequences for the array sensitivity and energy threshold through Monte Carlo simulations. Currently, there is no clear answer as to the optimum choice for any telescope size, and the signal sampling frequencies under discussion range from a few 100 MSample/s to ∼2 GSample/s.

Two techniques for signal recording and processing are in use in existing IACT arrays. These are based around flash analogue-to-digital converters (FADCs) and analogue sampling memories, and form the basis for the CTA development:

Flash analogue-to-digital converters. FADCs digitise the photon-sensor signals at rates of a few 100 MSample/s to a few GSample/s, writing the output into a digital ring buffer, often realised as a very-large-scale-integration gate array which also provides control logic and digital readout. The modest cost of digital buffers allows large trigger latency; delays of tens of microseconds can be realised. However, the dynamic range of FADCs is limited and typically no more than 8–10 bits are available, requiring either parallel conversion with different gains or dynamic gain switching, as used in the 500 MSample/s, 8-bit VERITAS FADC system [106]. The rather high cost of the fastest FADCs has led to the development of systems in which several channels are time-multiplexed onto one ADC, as used in the MAGIC 2 GSample/s, 10-bit FADC system. In principle, FADC-based recording systems allow the use of a purely digital trigger, acting on the digitised data in the ring buffer, to select air-shower events. Such a system is sketched in Fig. 45. None of the systems implemented so far uses this approach.
Instead, the systems implemented so far use parallel analogue trigger circuitry, which adds appreciable complexity to the electronics layout. The steadily increasing power of VLSI gate arrays may soon make digital trigger processors an attractive and feasible option.
Fig. 45 FADC-based recording system with a purely digital trigger acting on the digitised data
As well as being expensive, FADCs suitable for IACTs are traditionally also bulky and power hungry, negating the possibility of integrating the readout electronics into the camera and requiring the transmission of analogue signals over many tens of metres to a counting house. However, the development of low-power, low-cost FADCs in recent years implies that this situation may be changing, at least for modest-speed FADCs. In response to this, a 250 MSample/s system, named FlashCam, is under development for CTA. Monte Carlo simulations have shown that, at least for the MST and SST, 250 MSample/s is a sufficiently fast sampling rate to allow correct pulse shape reconstruction. Hardware prototyping is under way to confirm this simulated result. The sensitivity of the complete array with such a readout system must still be assessed.
Analogue sampling memories
Analogue sampling memories consist of banks of switched capacitors which are used in turn to record the signal shape. The maximum recording depth is given by the sampling time multiplied by the number of storage capacitors, which ranges from 128 to a few thousand, implying at most a few microseconds of trigger latency. Trigger signals are derived using additional analogue trigger circuits. Current ASIC implementations stop the recording of signals after a camera trigger and initiate the digitisation of the charge stored in a selected range of capacitors, thereby introducing a front-end deadtime of a few microseconds. The signal is then converted to a digital format using an ADC and can be stored in a local Field-Programmable Gate Array (FPGA) before transfer (see Fig. 46). The ADC is typically used to digitise the pulse integrated over a time window, and can therefore have a sampling frequency an order of magnitude lower than those considered in the FADC readout scheme. Additional information, such as the pulse width and arrival time, can also be stored, which is highly desirable. A first-in, first-out (FIFO) memory between the digital conversion and the FPGA can be used to smooth the distribution of event arrival times and so reduce fluctuations in the data acquisition rate.
Fig. 46 Analogue-memory-based recording system. The analogue trigger is formed in parallel to the data shaping and buffering
The dynamic range of analogue samplers is up to 12 bits. As with FADC-based systems, parallel channels with different gains or non-linear input stages can be employed to record a larger dynamic range of Cherenkov signals. Examples of such systems include the H.E.S.S. I readout system, which is based on the ARS ASIC [107], the H.E.S.S. II readout, based on the further developed Swift Analogue Memory (SAM) ASIC with significantly reduced readout deadtime [108], and the MAGIC II readout system, using the Domino Ring Sampler (DRS) ASIC [109]. Several analogue-sampling-based schemes are under development for CTA, including a project based on the next-generation version of the SAM chip, termed NECTAr, a DRS4-based project called Dragon, and a project based on the Target ASIC originally intended for AGIS. The main parameters of some of these ASICs are summarised in Table 4.
Table 4 Characteristics of switched-capacitor signal-recording ASICs (columns: sampling speed in GSamples/s, channels/chip, samples/channel, analogue bandwidth (MHz), dynamic range (bits), integrated trigger discriminator, integrated ADC, integrated digital control logic (PLL), typical readout latency (μs), power consumption (mW), status/use; the devices listed include the ARS used in H.E.S.S. I, the SAM used in H.E.S.S. II and the DRS4 used in MAGIC II)
While FADC systems may ultimately offer somewhat superior performance, analogue samplers could allow lower cost, in particular if much of the auxiliary circuitry surrounding and supporting the sampler ASIC, such as pixel trigger circuits, ADCs, digital buffers and readout controllers, can be integrated into a single multi-channel ASIC (Fig. 47). This is analogous to the readout for silicon strip sensors, where single readout ASICs typically accommodate 128 channels and where the cost per channel is at the level of a few €.
Fig. 47 Highly integrated analogue sampling ASIC. The single ASIC amplifies, stores and digitises the analogue signal, and buffers the digital data before sending them to the central camera recording system
At the current stage of CTA electronics design, analogue samplers and FADCs will be pursued in parallel. Existing ASICs such as the SAM or DRS4 are probably adequate for use in CTA. MC simulations should help to decide whether dual-gain channels are needed, which would significantly increase the electronics cost. A specific development effort is also aimed at producing non-linear input stages providing signal compression.
Readout electronics
Readout of digitised data has so far either relied on custom-built bus systems to collect data from electronics units covering the camera focal plane (such as the "drawers" of the H.E.S.S. telescopes), or has located the digitisation electronics in commercial VME or PCI crate systems. As a flexible and cost-effective alternative, the use of commercial Ethernet systems has recently been explored [110], using normal switches to buffer data sent via a low-level Ethernet protocol (Fig. 48). A low-cost front-end gate array emulates the Ethernet interface. Data transfer is asynchronous, with buffering in the front-end gate array, eliminating a source of deadtime. To enable synchronisation, events are tagged at the front end with an event marker. In tests with 20 sender nodes transmitting via a switch to a receiver PC, loss-free transmission of more than 10^10 packets at a data rate of more than 80 MByte/s was achieved. Current servers can operate with 2×4 Gbit interfaces and cope with the resulting data flow. It is therefore expected that loss-free transmission of the front-end data, even of a 2,000-pixel camera operating at data rates of 600 MByte/s, should not be a problem. Nevertheless, various forms of zero suppression could be implemented in the front end, reducing data rates by up to an order of magnitude. Since the Ethernet system operates in full-duplex mode, it can also be used for the control and parameterisation of the front-end components, such as HV supplies, and to set parameters for triggering, digitisation, etc. It would not be necessary to design a separate command bus, as employed in most current cameras.
Fig. 48 Possible scheme for an Ethernet-based front-end to back-end readout. A group of pixels with their ADCs is controlled by a dedicated FPGA. The same FPGA can be used to buffer the data and to transmit them through a dedicated Ethernet network to a camera computer (PC server), which buffers the data in its RAM and pre-processes events before sending them to an event-building farm
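As a rough cross-check of the quoted throughput figures, the sketch below estimates the raw front-end data rate of a camera from its pixel count, samples per pixel and camera trigger rate; the per-event sizes and trigger rates are illustrative assumptions, not CTA specifications.

```python
# Rough front-end data-rate estimate for an Ethernet-based camera readout.
# All event sizes and trigger rates are illustrative assumptions.

def camera_data_rate_mbyte_s(n_pixels, samples_per_pixel, bytes_per_sample, trigger_rate_hz):
    """Raw data rate in MByte/s before any zero suppression."""
    event_size_bytes = n_pixels * samples_per_pixel * bytes_per_sample
    return event_size_bytes * trigger_rate_hz / 1e6

# Example: a 2,000-pixel camera reading out 50 samples of 2 bytes per pixel.
for trigger_rate in (1e3, 3e3, 6e3):
    raw = camera_data_rate_mbyte_s(2000, 50, 2, trigger_rate)
    print(f"trigger rate {trigger_rate/1e3:.0f} kHz -> {raw:6.0f} MByte/s raw, "
          f"~{raw/10:5.0f} MByte/s after an order-of-magnitude zero suppression")
```

With these assumptions, a few kHz of camera triggers already reaches the several-hundred-MByte/s regime quoted above, which is why front-end zero suppression remains attractive even though the Ethernet transport itself can keep up.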
9.3.3 Triggering
Triggering the telescopes
Arrays of Cherenkov telescopes typically employ multi-level trigger schemes to keep the rate of random triggers from the night sky background low. At the first level, signals from individual pixels are discriminated above a threshold. These pixel-level signals are input to a second-level, topological trigger. The topological trigger is used to identify concentrations of Cherenkov signals in local regions of the camera, via pattern recognition or a sum of first-level triggers, to form a telescope-level trigger. A third, array-level trigger is formed by combining trigger information from several telescopes. The trigger chain within a telescope may follow a digital or an analogue path. In H.E.S.S., MAGIC and VERITAS, analogue schemes are used, but for CTA several approaches for both options are under investigation. A digital scheme would require the continuous digitisation (with one or more bits) of the signal coming from the PMTs. Components that look for coincidences between digitised signals with a predefined timing are commercially available. In a digital scheme, the trigger is very flexible and almost any algorithm can be implemented, even a posteriori. Trigger algorithms and parameter settings for each camera can easily be adapted for each telescope type, array configuration and physics programme (e.g. energy range). Both sector and topological trigger concepts can be implemented in a digital trigger system. The information provided by a digital trigger is essentially a "screenshot" of the camera for every time slice. Even if only one or a few bits are used to encode the trigger information, it may be worthwhile to add this information to the data stream. In the extreme case of a digital trigger with sufficient resolution, only the digital stream could be used, as is proposed for the FlashCam development. On the other hand, the digitisation frequencies available at reasonable cost may yield worse rejection of random triggers from the night sky background than an analogue approach. The telescope trigger is traditionally formed by looking for a number of pixels above threshold, or a number of neighbouring pixels above threshold, within the camera. This is typically implemented by dividing the camera into sectors, which must overlap to provide a uniform trigger efficiency across the camera. By requiring several pixels to trigger at once, random fluctuations due to the night sky background and PMT afterpulses are greatly reduced. Alternative schemes are also under investigation. These include a sum trigger, which can lead to a significant reduction of the trigger threshold [111]. In the sum trigger, the analogue or digital sum of all pixels in a cluster is formed and a threshold is set to initiate a trigger. It is necessary to clip pixel signals before summing to prevent large afterpulses from triggering a cluster (see Fig. 49). All these approaches can be implemented in either an analogue or a digital path.
Fig. 49 Trigger rate (in Hz) against the discriminator threshold (in photoelectrons). NSB dominates at low thresholds. Without clipping, afterpulses largely dominate the rate (blue), while they are effectively eliminated by clipping (red). The black points beyond 25 photoelectrons are due to cosmic-ray showers
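The effect of the clipping step can be illustrated with a minimal Python sketch; the clip level, cluster size and threshold below are arbitrary illustrative values, not the tuned parameters of any existing sum trigger.

```python
# Minimal illustration of a clipped sum trigger for one pixel cluster.
# Pixel amplitudes are in photoelectrons; all numbers are illustrative only.

def clipped_sum_trigger(pixel_amplitudes, clip_level=6.0, threshold=25.0):
    """Clip each pixel at clip_level, sum the cluster, compare to threshold."""
    return sum(min(a, clip_level) for a in pixel_amplitudes) >= threshold

# A faint Cherenkov image spread over several pixels exceeds the threshold ...
shower_like = [3.0, 4.5, 5.0, 4.0, 3.5, 3.0, 2.5, 2.0]
# ... while a single large afterpulse in one pixel does not, thanks to clipping.
afterpulse_like = [40.0, 0.5, 0.3, 0.2, 0.4, 0.1, 0.3, 0.2]

print("shower-like cluster triggers:    ", clipped_sum_trigger(shower_like))      # True
print("afterpulse-like cluster triggers:", clipped_sum_trigger(afterpulse_like))  # False
```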
The size of camera sectors and their overlaps have implications for the threshold and the detection efficiency. Given the different goals, they may differ for the LST, MST and SST. Schemes for all telescope sizes are under investigation using Monte Carlo simulations. The shape and dimensions of the mechanical clusters put limitations on possible camera sectors and their overlap, but these are only of second order. The default reaction to a trigger is the readout of the entire camera. However, an autonomous cluster trigger is also under study. This allows sections of the camera to form trigger decisions independently, and these sections are read out autonomously. In the high-energy range, observations at low elevation produce images that propagate in time through the camera. The propagation time can be much longer than the usual integration window of the Cherenkov signal acquisition. The autonomous readout would allow the recording of the time slot in which the signal is present in each section of the camera, following the propagation of the image through the camera. For low-energy showers, the shower image covers a small region of the camera and it can be useful to read out only a part of the focal plane, to save bandwidth on the network and to lower the deadtime of the system. The earlier a trigger system enters a purely digital level, the more easily and reliably it can be simulated. Schemes which rely on the addition of very fast analogue or digital signals are potentially more powerful, but could be sensitive to details of the pulse shapes and to the transit-time dispersion between different PMTs, requiring a pre-selection of PMTs with similar transit times or the implementation of matched delays to compensate for intrinsic differences. Note that no signal recording scheme rules out the use of a given triggering scheme, but the use of FADCs to record the signal would allow the implementation of a digital trigger based on the already digitised signals, reducing the cost and complexity of the system.
Triggering the array
Current array trigger schemes for systems of Cherenkov telescopes [72] provide asynchronous trigger decisions, delaying telescope trigger signals by an appropriate amount to compensate for the differences in the time at which the Cherenkov light reaches the telescopes, and scanning the trigger signals for pre-programmed patterns of telescope coincidences. The time to reach a trigger decision and to propagate it back to the telescopes is about 1 μs or more. While FADC-based readout systems can buffer signals for this period, analogue-sampling ASICs will usually not provide sufficient memory depth and require the halting of waveform sampling after a telescope trigger, while awaiting a third-level telescope coincidence trigger. The resulting deadtime of a few μs limits telescope (second-level) trigger rates to some 10 kHz, which does not represent a serious limitation. The latest analogue-sampling ASICs allow digitisation of stored signals on time scales of 2–3 μs (see Table 4), comparable to the array trigger latency.
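The delay-and-coincidence logic described above can be sketched in a few lines of Python; the delays, gate width, multiplicity requirement and time stamps are invented for illustration only.

```python
# Illustrative array-level coincidence search on telescope trigger time stamps.
# Delays compensate for the different arrival times of the Cherenkov light front;
# all numerical values below are invented for illustration.

COINCIDENCE_GATE_NS = 40.0   # assumed coincidence window
MIN_TELESCOPES = 2           # assumed required telescope multiplicity

triggers = {"T1": [1000.0, 5000.0], "T2": [1012.0, 7200.0], "T3": [995.0]}  # ns
delays = {"T1": 0.0, "T2": -15.0, "T3": 8.0}                                # ns

# apply delay compensation and merge into one time-ordered list of (time, telescope)
corrected = sorted((t + delays[tel], tel)
                   for tel, times in triggers.items() for t in times)

i = 0
while i < len(corrected):
    t0, _ = corrected[i]
    group = [e for e in corrected if 0 <= e[0] - t0 <= COINCIDENCE_GATE_NS]
    telescopes = {tel for _, tel in group}
    if len(telescopes) >= MIN_TELESCOPES:
        print(f"array trigger at ~{t0:.0f} ns with telescopes {sorted(telescopes)}")
    i += len(group)
```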
Since the stored signals can now be digitised this quickly, a new option for triggering the array becomes possible: pixel signals are read out and digitised after each telescope trigger and stored in digital memory, tagged with an event number. Given that data are buffered and that buffers can easily be made large, restrictions on the array trigger latency are greatly relaxed (with GByte memory, about 1 s of data can be buffered) and one can implement a software-based asynchronous trigger. With each local trigger, an absolute time stamp is captured for the event with an accuracy of the order of 1 ns and transmitted to the camera CPU. This computer collects the time stamps and possibly additional trigger information for each event, e.g. pixel trigger patterns, and transmits them every 10–100 ms via standard Ethernet using TCP/IP to a dedicated central trigger computer. The central computer receives the time stamps from all telescopes and uses this information to test for time coincidences of the events and to derive the telescope system trigger. In addition, the time and trigger information can be used to obtain a first estimate of the core position and shower direction. Following the central trigger decision, the central trigger CPU informs the corresponding telescopes which of the buffered events are to be rejected and which fulfil the system trigger condition and should be pre-processed in the camera CPU and transmitted for further stereoscopic processing. Assuming a local trigger rate of 10 kHz and that about 100 bytes of trigger information are generated by each telescope, the central trigger computer needs to handle up to 100 MByte/s in a 100-telescope system, which can readily be done with today's technology. In such a trigger scheme, the central trigger decision is software-based, but the "hard" timing from the camera trigger decision is used. It is therefore scalable and fully flexible, and all types of sub-systems can be served in parallel. At the same time it uses the shortest possible coincidence gates and provides optimum suppression of accidental coincidences.
9.3.4 Camera integration
Signal transmission from the photo-sensors to the recording electronics represents a critical design issue if the electronics is located far from the photo-sensors. Conventional cables limit bandwidth, are bulky and difficult to route across telescope bearings, and are costly. MAGIC uses optical signal transmission, circumventing the first two problems, at considerable expense. H.E.S.S. avoids signal transmission altogether by combining 16 photo-sensors and their associated electronics in "drawers", requiring only power and an Ethernet connection to the camera [112], but limiting flexibility as regards upgrades of individual components. At least for the SST and MST, which will be produced in significant quantities and for which the cost of the electronics is a decisive factor, the most effective solution seems to be to combine photo-sensors and electronics in the camera body. The design should allow easy swapping of the camera for a spare unit, allowing convenient maintenance and repair of faulty cameras at a central facility. However, over the expected lifetime of CTA, upgrades at least of the photo-sensors are likely. The same may be true for the trigger and data recording systems, where novel networking components may allow transmission of significantly larger amounts of digital data than is currently possible. A viable option could therefore be, rather than combining photo-sensors and electronics in a single mechanical unit, to build a photo-sensor plane with short connections to electronics units, which in turn feed a trigger system via a flexible interface (Fig. 51).
For ease of mechanical assembly, both photo-sensors and electronics will be packaged into multi-channel units. Dual-mirror solutions, such as the Schwarzschild–Couder telescopes, require much smaller cameras and can therefore use cheap multi-anode photo-sensors. Figure 50 shows a possible solution considered for AGIS, using 64-pixel multi-anode PMTs [113].
Fig. 50 Instrumentation of a 50 cm diameter camera for a dual-mirror telescope using 64-pixel multi-anode PMTs. One pixel is about 6×6 mm²
Mechanical packaging of the entire camera and sealing against the environment is crucial for stable performance (Fig. 51). In its daytime configuration with closed camera lid, the camera body should be reasonably waterproof. Dust penetrating the camera and deposited on connectors and optical components is a serious issue. To protect the photo-sensors and the light-collecting funnels and to allow for easy cleaning, an optical entrance window made of near-UV transparent material is desirable, even if this induces a modest light loss due to reflection. While larger-scale integration should reduce power consumption compared to current systems, a camera will nevertheless consume kilowatts of power and must be cooled. Air cooling requires high-quality filtering of the airflow into the camera. Closed-circuit cooling systems, involving internal circulation of a cooling medium and appropriate heat exchangers, improve long-term reliability, but add cost and weight.
Fig. 51 Concept for the packaging of the electronics contained in the camera
9.4 Calibration and atmospheric monitoring
The higher sensitivity of CTA means good gamma-ray statistics for many sources. The instrument's systematic uncertainties may therefore limit the accuracy of the measurements. The atmosphere is an integral part of an IACT, so monitoring and correcting for atmospheric inhomogeneity must be addressed in addition to the detailed calibration and monitoring of the response and characteristics of the telescopes. Work is ongoing to address both issues, as well as their interplay, with the goal of characterising the systematic uncertainties to an unprecedented level. Teams of world experts have already gathered to develop state-of-the-art instrumentation for atmospheric monitoring and the associated science for CTA. These teams are actively participating in the corresponding CTA work package (ATAC).
9.4.1 Telescope calibration
The calibration of the CTA telescopes has two distinct aspects. Firstly, the absolute gain of the system must be determined. Secondly, the pointing accuracy of the telescope must be measured. The precise measurement of the gain of each electronic channel of CTA requires the development of a single, reliable calibration device which can measure the flatfielding coefficients and the ratio between a single photoelectron and the number of digital counts recorded. This development will build on existing experience with calibration devices, for example the H.E.S.S. II flatfielding system shown in Fig. 52. Overall absolute calibration is achieved by reconstructing the rings generated by local muons. A special pre-scaled single-telescope trigger could be implemented to enhance the rate at which these are recorded.
Fig. 52 Layout of the H.E.S.S. II flatfielding and single-photoelectron device. For a large array of telescopes, it is likely that the laser will be replaced by LEDs and that the mechanical filter wheel will be replaced by an electronic system
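In software terms, flat-fielding amounts to deriving a per-pixel correction from uniform-illumination (flasher) runs together with a photoelectron-to-counts conversion. The sketch below shows the idea; the pixel charges and the single-photoelectron gain are invented purely for illustration and do not describe the actual H.E.S.S. or CTA calibration chain.

```python
# Minimal flat-fielding sketch based on uniform-illumination flasher events.
# All numerical values are invented for illustration.

mean_charge = [412.0, 388.0, 430.0, 395.0, 405.0]   # mean charge per pixel (ADC counts)
ADC_COUNTS_PER_PE = 4.1                             # assumed single-photoelectron response

camera_average = sum(mean_charge) / len(mean_charge)

# flat-field coefficient: multiply a pixel's signal by this to equalise the response
flat_field = [camera_average / q for q in mean_charge]

for i, (q, c) in enumerate(zip(mean_charge, flat_field)):
    pe = q / ADC_COUNTS_PER_PE                      # calibrated intensity in photoelectrons
    print(f"pixel {i}: mean {q:6.1f} counts -> flat-field coefficient {c:5.3f}, "
          f"~{pe:5.1f} p.e. per flasher pulse")
```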
In the development of the calibration apparatus, many challenges must be addressed. The first is the difficulty of uniformly illuminating large, wide field-of-view cameras. This problem is twofold: firstly, diffusers must be able to present a uniform signal out to the edge of the field of view; secondly, the pixels across the field of view must uniformly accept the diffused signal on their photo-cathodes. The second aspect can be difficult to achieve when reflective lightcones at the camera edge have a different acceptance for a close-by, centrally placed diffuse light source than those in the centre. The use of light sources of different colours would allow the quantification of any differences and/or changes in the quantum efficiency of the pixels. An additional challenge concerns the measurement of the single-photoelectron response. Current telescope systems measure this either in situ with a low light background level, or indirectly using photon statistics [114]. A comparison of these two methods allows the study of their associated systematic errors and the choice of the best system for CTA. The requirements of the telescope pointing measurement are somewhat simpler, but vitally important. Here, a system of two CCD cameras mounted on each telescope is envisaged. The first measures the position of the night sky relative to the telescope dish, and the second the position of the telescope camera relative to the dish. In combination, the system allows the astronomical pointing of the telescope to be assessed accurately.
9.4.2 Atmospheric monitoring
The calibration of the CTA telescopes is one critical calibration and monitoring task; a second is the monitoring of the atmosphere, which forms part of the detector: it is the medium in which the incident gamma ray initiates the particle shower and through which the Cherenkov photons must travel. The estimation of the energy of an individual gamma ray is based on the calorimetric energy deposited in the atmosphere, which in turn is measured via the Cherenkov photon emission. Therefore, any change in atmospheric quality can affect the detected signal. To investigate this effect, a set of benchmark simulations of a 97-telescope array design was initiated to test the performance of an array of imaging Cherenkov telescopes in the presence of varying atmospheric conditions. Simulations were produced for a clear atmosphere and for an atmosphere with a significant layer of low-level dust, as derived from measurements taken with a 355 nm single-scattering Lidar deployed in the Namibian Highlands. These show that, if unaccounted for, the changing atmospheric quality produces a significant shift in the reconstructed gamma-ray spectrum. This can be seen in Fig. 53.
Fig. 53 Recovering spectral information for non-ideal observing conditions. From a full simulation database a randomly sampled spectrum of 10^5 events with a spectral slope of E^−2.3 is drawn. These events are then reconstructed using simulation-based look-up tables which give the reconstructed energy as a function of the camera image brightness and the reconstructed distance to the shower. For different atmospheric conditions (described in Table 5), a reconstructed spectrum is derived. The open circles show the reconstructed differential spectrum for case 1, the open squares for case 2 and the closed triangles for case 3. By incorporating Lidar data into the reconstruction (case 3), a corrected spectrum can be recovered with approximately the same normalisation and slope as for a clear night sky (case 1)
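The look-up-table reconstruction referred to in the caption can be sketched as a two-dimensional interpolation of energy versus image brightness and shower distance, with one table per assumed atmospheric state. The table values and the interpolation helper below are invented for illustration and do not reproduce the actual simulation-derived tables.

```python
# Sketch of look-up-table energy reconstruction: E = f(image brightness, shower distance),
# with one table per assumed atmospheric state.  All table values are invented.

from bisect import bisect_right

BRIGHTNESS_AXIS = [100.0, 300.0, 1000.0, 3000.0]   # image brightness (p.e.)
DISTANCE_AXIS = [50.0, 150.0, 250.0]               # reconstructed core distance (m)

ENERGY_TABLE_TEV = {                               # indexed [brightness][distance]
    "clear": [[0.10, 0.15, 0.25], [0.30, 0.45, 0.70],
              [1.00, 1.50, 2.30], [3.00, 4.50, 7.00]],
    "dusty": [[0.13, 0.19, 0.32], [0.38, 0.57, 0.90],
              [1.30, 1.90, 2.90], [3.80, 5.70, 8.80]],
}

def interp_1d(axis, values, x):
    """Piecewise-linear interpolation, clamped at the axis ends."""
    if x <= axis[0]:
        return values[0]
    if x >= axis[-1]:
        return values[-1]
    i = bisect_right(axis, x) - 1
    frac = (x - axis[i]) / (axis[i + 1] - axis[i])
    return values[i] + frac * (values[i + 1] - values[i])

def reconstruct_energy(brightness, distance, atmosphere="clear"):
    table = ENERGY_TABLE_TEV[atmosphere]
    rows = [interp_1d(DISTANCE_AXIS, row, distance) for row in table]
    return interp_1d(BRIGHTNESS_AXIS, rows, brightness)

print(reconstruct_energy(500.0, 120.0, "clear"))   # energy estimate under a clear sky
print(reconstruct_energy(500.0, 120.0, "dusty"))   # same image assuming a dusty atmosphere
```

Using the table that matches the actual (Lidar-measured) atmospheric state, rather than the clear-sky table, is what restores the spectrum in case 3 of Fig. 53.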
Table 5 The combinations of look-up tables (derived from simulations) and simulated spectra used to derive the effect of the atmosphere on the reconstructed spectra illustrated in Fig. 53 (for each case, the simulation and the look-up tables are each derived from either the clear or the dusty database)
Many current Cherenkov telescope arrays have in situ single-scattering Lidars. This type of Lidar possesses a strong and variable systematic error in the derived transmission, up to approximately 50–60% [115]. After discussions with members of the Pierre Auger Observatory (PAO) and other atmospheric scientists, CTA has decided to adopt the Raman Lidar technique as the tool of choice for accurately probing atmospheric quality. Below and around shower maximum, it is believed that this technique will reduce the systematic error in the derived transmission to approximately 5% [115]. Therefore, Raman Lidars are currently under development and will be installed at the sites of some existing Cherenkov telescopes in order to test their efficacy in ground-based gamma-ray analysis. If successful, these atmospheric monitoring systems will allow CTA to significantly reduce the systematic error in energy measurements and derived source fluxes.
9.5 Quality assurance
Since the design study phase, the CTA project has included a work package named "Quality Assurance and Risk Assessment". The objective of this WP is to implement a uniform approach to risk analysis in the design, commissioning and operation of the telescopes and of the facility, and for quality assurance of the telescope components and of the assembly procedures. "Risks" are any features that can be a threat to the success of the project. They can have negative effects on the cost, schedule and technical performance of CTA. The aim of project risk management is to identify, assess, reduce, accept (where necessary) and control project risks in a systematic and cost-effective manner, taking into account technical and programmatic constraints. Quality assurance ensures a satisfactory level of quality for all steps of the design study. This level of quality is guaranteed by the correct implementation of the pre-defined quality criteria and the participation of all the project actors. Including quality assurance and risk assessment from the very start of the design study phase will have a positive effect on the building schedule and cost of CTA. The objective of the design study is to develop telescopes which will be produced in series during the building phase, so the study will be carried out in partnership with industry. Quality assurance and risk assessment will ensure that the project has good traceability and good control of risks from the outset. This WP is managed by a coordinator who defines standards and quality methods for the project. To ensure the implementation of quality in the project laboratories, "Local Quality Correspondents" (LQCs) will be identified and trained. These people will dedicate part of their time to quality issues, in proportion to their laboratory's participation in the overall project.
The main tasks of the WP participants are:
To define the quality assurance organisation (the roles of the participants)
To ensure that quality control and risk analysis procedures are defined and applied uniformly across the project, to ensure high quality and reliability of hardware and software
To ensure that the risk analysis, including dependability (reliability, availability, maintenance and safety), is defined on the basis of the proposed technical configuration
To provide support and expertise for implementing the quality system and associated tools across the project
To verify the coherence of the procedures and protocols in order to approve them for subsequent release and use
To verify the application of the quality procedures across the project
To identify and reduce technical and management risks
Quality assurance and risk assessment concern the whole project. Thus, the members of the "Quality Assurance and Risk Assessment" work package will have active links to all work packages, to the project management and to all laboratories involved in building parts of CTA.
10 CTA site selection
Selection of sites for CTA is obviously crucial for achieving optimum performance and science output. Criteria for site selection include, among others, geographical conditions, observational and environmental conditions, and questions of logistics, accessibility, availability, stability of the host region and local support:
Geographical conditions
For best sky coverage, the latitude of the sites should be around 30° north and south, respectively. The sites have to provide a reasonably flat area of about 1 km² (north) and at least 10 km² (south). Optimum overall performance is obtained for site altitudes between about 1,500 and 4,000 m. Even higher altitudes allow a further reduction of the energy threshold [93] at the expense of performance at medium and high energies, and might be considered for the northern array. A low component of the geomagnetic field parallel to the surface is also desirable, since such fields deflect air-shower particles.
Observational conditions
Obviously, the fraction of clear nights should be high. For good sites, this fraction is well above 60%, reaching up to 80% for the very best sites. Artificial light pollution must be well below the natural level of night sky background, which excludes sites within some tens of km of major population centres. Atmospheric transparency should be good, implying dry locations with low amounts of aerosols and dust in the atmosphere.
Environmental conditions
Environment and climate influence both the operational efficiency and the survival conditions of the instrument. Wind speeds above 10 m/s may impact observations; peak wind speeds, which may range from below 100 km/h to beyond 200 km/h depending on the site, have a major impact on telescope structure and cost. Sand storms and hail represent a major danger for unprotected mirror surfaces. Snow and ice prevent observations and will influence instrument costs, e.g. by making heating systems necessary and requiring increased structural stability. Seismic activity will similarly increase the requirements on telescope structures and buildings.
Infrastructure and logistics
A well-developed infrastructure, e.g. as a result of already existing observatories, is an advantage. Connection to the power grid and high-speed internet access are mandatory. There should be good access to the site, i.e. nearby airports for air travel to/from Europe and elsewhere, and local access roads.
A major population centre with technical and commercial infrastructure within convenient travel distance is desirable.
Other criteria
These include availability of the site for construction, guarantees for long-term operation and access, political stability of the host region, safety of personnel both during travel and stay, and availability of local administrative, technical and funding support, as well as possibilities for scientific cooperation with local groups.
For both the observational and the environmental conditions, a long-term (multi-year) data record is required to allow dependable decisions to be made. While archival remote-sensing data can provide some information, well-explored sites with existing installations and good records are favoured. It is unlikely that any site is optimal in all respects, so the different criteria will have to be balanced against each other. Reliable and efficient operation of the observatory should be a key criterion. Site evaluation includes a number of different approaches, at different stages of progress for a candidate site:
Use of remote-sensing archival data and local archival data to evaluate observing conditions and environmental conditions.
Site visits and information gathering by local collaborating groups on logistics aspects.
Dedicated CTA measurements; since long-term measurements are excluded, this approach is useful only for those quantities where short campaigns can provide meaningful results, such as the determination of natural and artificial night-sky brightness.
A first preselection can look for sufficiently large and flat areas above 1,500 m a.s.l. (based on a topographic model of the Earth [116]), with the requirement that the artificial background light is minimal (as determined from satellite images [117]) and that the average cloud coverage is less than 40% (as provided by the International Satellite Cloud Climatology Project, ISCCP, based on the analysis of satellite data; http://isccp.giss.nasa.gov/products/dataview.html). The resulting map (Fig. 54) shows very few locations matching these basic criteria, among them the well-known sites in Chile and Namibia. However, while the ISCCP data have the advantage of covering the whole planet, the resolution is relatively coarse and sites with very local conditions (such as mountain tops) may deviate significantly from the "pixel" average. Also, daytime and night-time cloud cover will usually be different, and only the latter is relevant for Cherenkov astronomy. Special algorithms and high-resolution data for the identification of potential observatory sites have been provided by Erasmus [118], but only for selected areas, such as the Chilean sites, the Indian site at Hanle or the Yanbajing site in Tibet. Similar searches are being conducted using MODIS and ISCCP maps, as well as the recently released ESO application FriOwl, which provides access to an extensive database of information from the last 40 years (http://archive.eso.org/friowl-45/).
Fig. 54 Green areas indicate sites above 1,500 m a.s.l. which offer sufficiently flat areas, minimal artificial background light and an average cloud cover of <40%, selected on the basis of topographic and satellite data
Based on these preliminary evaluations, potentially interesting sites have been selected at which detailed studies will be conducted in the coming months.
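The preselection logic just described is essentially a set of boolean cuts on gridded site properties. A minimal sketch, with invented candidate values and an assumed cut on artificial light (the text gives explicit thresholds only for altitude and cloud cover):

```python
# Illustrative site-preselection filter combining altitude, cloud cover and
# artificial background light.  The candidate values are invented placeholders.

candidates = [
    # name,             altitude (m a.s.l.), mean cloud cover, artificial light (arbitrary units)
    ("Highland A",       1800,               0.25,             0.02),
    ("Coastal plain B",   300,               0.35,             0.01),
    ("Mountain top C",   2400,               0.55,             0.03),
    ("Desert plateau D", 2600,               0.20,             0.30),
]

MIN_ALTITUDE_M = 1500        # above 1,500 m a.s.l.
MAX_CLOUD_COVER = 0.40       # average cloud cover below 40%
MAX_ARTIFICIAL_LIGHT = 0.05  # assumed cut on artificial night-sky brightness

preselected = [name for name, alt, cloud, light in candidates
               if alt >= MIN_ALTITUDE_M
               and cloud < MAX_CLOUD_COVER
               and light < MAX_ARTIFICIAL_LIGHT]

print("pass preselection:", preselected)   # -> ['Highland A']
```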
Northern site candidates are:
Canary Islands, La Palma and Tenerife: These are well-known and well-explored observatory sites at about 26° N and about 2,400 m a.s.l., with the Observatorio del Roque de los Muchachos on La Palma and the Observatorio del Teide on Tenerife.
Hanle in India, in the Western Himalayas: This high-altitude site (33° N, 4,500 m a.s.l.) hosts a small observatory and an array of Cherenkov instruments deployed by Indian groups.
San Pedro Mártir, Baja California: A well-established astronomical site that already hosts two observatories run by UNAM (Universidad Nacional Autónoma de México). It is situated at about 31° N, at 2,800 m a.s.l.
Southern site candidates are:
Khomas Highland of Namibia: This is a well-known astronomical site, at 1,800 m a.s.l. and 23° S, and is the home of the H.E.S.S. instrument. The region offers a range of suitable, large and flat areas.
Chilean sites: Chile is home to some of the world's premier optical observatories. However, the availability of sufficiently large sites near these locations is limited. A possible site is north of La Silla at 29° S and 2,400 m a.s.l. Another potential site is near Cerro Paranal, with even better observing conditions, but no sufficiently flat area in this region has been identified so far.
El Leoncito Reserve in Argentina: This site is at 32° S and 2,600 m a.s.l. and hosts the El Leoncito Astronomical Observatory.
Puna Highland in Argentina: The region offers some large sites at 3,700 m a.s.l. with a sky quality equivalent to the best Chilean sites. These sites have good access to a railway line.
The final decision among otherwise comparable sites may rely on considerations such as financial or in-kind contributions by the host regions. It is likely that an inter-governmental agreement will be required to assure long-term availability of the site, as well as guaranteed access and free transfer of data. At the same level, issues such as import taxes, value added tax, fees, etc. should be addressed. Such agreements exist for H.E.S.S., Auger and other observatories operated by international collaborations.
11 Outlook
The Cherenkov Telescope Array was conceived back in 2005 and was then promoted by members of the H.E.S.S. and MAGIC collaborations. It soon became apparent that, with existing technologies, a gamma-ray observatory could be designed that was much more powerful than any of the existing facilities. An improvement by a factor of 10 in sensitivity around 1 TeV and an extension of the energy range from a few tens of GeV to >100 TeV, well beyond the currently accessible range, was achievable with an array of a large number (≈100) of differently sized telescopes. With the results from current Cherenkov telescopes pouring in, it became obvious that with such an instrument a vast number of sources of very different types could be discovered and studied with unprecedented precision. Answers to long-standing questions in a number of science areas seemed possible. The extent and the diversity of the science case were, and are, stunning (see Section 3). CTA would truly be the first large open observatory for astronomy of the extreme universe beyond the GeV range. Not surprisingly, many scientists were attracted to CTA and its science case grew rapidly, as did the number of supporters who now form a large international collaboration which is investigating how best to realise the project. CTA has received consistently excellent reviews and high rankings in science roadmaps in Europe and across the world.
CTA is an acknowledged ESFRI project, features high on the roadmaps of future projects of ApPEC, ASPERA and ASTRONET, and has been well received by national funding agencies. The potential of CTA is well recognised outside Europe, with the USA, Japan, India, Brazil, Argentina and other countries contributing significantly. The US Decadal Survey endorsed a strong US participation in CTA as one of the four most important ground-based initiatives of the next ten years. Since 2006, and specifically in a 4-year design study, it has been shown that CTA, with observatories in the northern and southern hemispheres, can be built to achieve its goal performance at an investment cost in the range of 150 M€, a modest price for an installation of such scientific potential. CTA has recently received substantial funding from the European Community, for preparing for construction and operation, and from national funding agencies, for development and prototyping. There is much excitement amongst all participants, and the wider science community, about the prospect that CTA will soon move from design to reality. In this report, an account of the main design work performed so far is presented, which constitutes a solid basis for the prototyping and construction phases that lie ahead. The Preparatory Phase (3 years) and the subsequent construction phase (2013–2018) will pose many challenges. But CTA is a well-organised international collaboration of 25 countries and >600 scientists with extensive expertise in all relevant areas. Its members are eager and ready to tackle the problems that lie ahead. This effort is well worth it, as CTA will provide a huge science return in astrophysics, particle physics, cosmology and fundamental physics, and lead to a bright future for ground-based gamma-ray astronomy.
Notes
1 GeV = 10^9 eV; 1 TeV = 10^12 eV; 1 PeV = 10^15 eV.
CTA was first publicly presented to an ESFRI panel in Autumn 2005.
There are several clear cases of blazar SEDs where the X-ray peak and the γ-ray peak, with their correlated luminosity and spectral changes, are interpreted within a synchrotron self-Compton (SSC) model [30] as the synchrotron and IC peaks, respectively, produced by a time-varying population of particles (e.g. [31, 32]). A variant of the Compton scenario considers that the soft photons produced externally to the jet may be more effective than the internal ones (external Compton, EC, model; e.g. [33, 34]). Models based on hadronic acceleration (e.g. [35, 36]) can also reproduce the blazar SEDs and lightcurves.
Particle Astrophysics Scientific Assessment Group of the High Energy Physics Advisory Panel.
Due to Rayleigh and Mie scattering, the actual spectrum at the camera is considerably flatter (see Fig. 43). At large zenith angles the UV part of the spectrum might no longer be detectable.
Acknowledgements
We gratefully acknowledge financial support from the following agencies and organisations: Ministerio de Ciencia, Tecnología e Innovación Productiva (MinCyT), Comisión Nacional de Energía Atómica (CNEA) and Consejo Nacional de Investigaciones Científicas y Técnicas (CONICET), Argentina; State Committee of Science of Armenia; Ministry for Research, CNRS-INSU, CNRS-IN2P3 and CEA, France; Max Planck Society, BMBF, DESY, Helmholtz Association, Germany; MIUR, Italy; Netherlands Research School for Astronomy (NOVA), Netherlands Organization for Scientific Research (NWO); Ministry of Science and Higher Education and the National Centre for Research and Development, Poland; MICINN support through the National R+D+I, CDTI funding plans and the CPAN and MultiDark Consolider-Ingenio 2010 programme, Spain; Swedish Research Council and Royal Swedish Academy of Sciences, Sweden; Swiss National Science Foundation (SNSF), Switzerland; Leverhulme Trust, Royal Society, Science and Technologies Facilities Council, Durham University, UK; National Science Foundation, Department of Energy, Argonne National Laboratory, University of California, University of Chicago, Iowa State University, Institute for Nuclear and Particle Astrophysics (INPAC-MRPI program), Washington University McDonnell Center for the Space Sciences, USA.
Open Access This article is distributed under the terms of the Creative Commons Attribution Noncommercial License which permits any noncommercial use, distribution, and reproduction in any medium, provided the original author(s) and source are credited.
References
See, e.g., Aharonian, F.A.: The H.E.S.S. survey of the inner galaxy in very high energy gamma rays. Astrophys. J. 636, 777 (2006) (and subsequent presentations in conferences)
Barrau, A., et al.: The CAT imaging telescope for very-high-energy gamma-ray astronomy. Nucl. Instrum. Methods A 416, 278 (1998)
See, e.g., Aharonian, F.A.: The energy spectrum of TeV gamma rays from the Crab Nebula as measured by the HEGRA system of imaging air Cerenkov telescopes. Astrophys. J. 539, 317 (2000)
Aharonian, F.A., Völk, H.J., Horns, D. (eds.): Proc. 2nd Int. Meeting on High Energy Gamma-Ray Astronomy, Heidelberg (2004). AIP Conference Proceedings, vol. 745 (2004)
Aharonian, F.A., Hofmann, W., Rieger, F. (eds.): Proc. 4th Int. Meeting on High Energy Gamma-Ray Astronomy, Heidelberg (2008). AIP Conference Proceedings, vol. 1085 (2008)
Hinton, J.A., Hofmann, W.: Teraelectronvolt astronomy. Annu. Rev. Astron. Astrophys. 47, 523 (2009)
Aharonian, F.A., et al.: High energy astrophysics with ground-based gamma ray detectors. Rep. Prog. Phys. 71, 096901 (2008)
Buckley, J., et al.: The status and future of ground-based TeV gamma-ray astronomy. A White Paper prepared for the Division of Astrophysics of the American Physical Society, AGIS White Paper. arXiv:0810.0444v1
2nd Fermi Symposium, Washington, Capitol Hill, eConf Proceedings C091122. http://www.slac.stanford.edu/econf/C0911022/ (2009)
Reynolds, S.P.: Supernova remnants at high energy. Annu. Rev. Astron. Astrophys. 46, 89 (2008)
Malkov, M.A., Drury, L.O'C.: Nonlinear theory of diffusive acceleration of particles by shock waves. Rep. Prog. Phys. 64, 429 (2001)
Aharonian, F.A., Atoyan, A.M.: On the emissivity of π^0-decay gamma radiation in the vicinity of accelerators of galactic cosmic rays. Astron. Astrophys. 309, 917 (1996)
Gabici, S., Aharonian, F.A., Blasi, P.: Gamma ray signatures of ultra high energy cosmic ray accelerators: electromagnetic cascade versus synchrotron radiation of secondary electrons. Astrophys. Space Sci. 309, 365 (2007)
31st ICRC, Łódź, Poland. http://icrc2009.uni.lodz.pl/ (2009)
Aharonian, F.A., et al.: Energy dependent γ-ray morphology in the pulsar wind nebula HESS J1825-137. Astron. Astrophys. 460, 365 (2006)
Dalton, M., et al.: Presentation at TeV Particle Astrophysics 2010, Paris (2010)
Aharonian, F.A., et al.: First detection of a VHE gamma-ray spectral maximum from a cosmic source: HESS discovery of the Vela X nebula. Astron. Astrophys. 448, L43 (2006)
Gaensler, B.M., Slane, P.O.: The evolution and structure of pulsar wind nebulae. Annu. Rev. Astron. Astrophys. 44, 17 (2006)
de Jager, O.C., Djannati-Atai, A.: In: Becker, W. (ed.) Springer Lecture Notes on "Neutron Stars and Pulsars: 40 Years After Their Discovery". arXiv:0803.0116v1 (2008)
Aharonian, F.A., et al.: Very high energy gamma rays from the direction of Sagittarius A*. Astron. Astrophys. 425, L13 (2004)
Albert, J., et al.: Observation of gamma rays from the galactic center with the MAGIC telescope. Astrophys. J. 638, L101 (2006)
Zhang, J.L., Bi, X.J., Hu, H.B.: Very high energy γ ray absorption by the galactic interstellar radiation field. Astron. Astrophys. 449, 641 (2006)
Abdo, A., et al.: Fermi LAT observations of LS I +61°303: first detection of an orbital modulation in GeV gamma rays. Astrophys. J. 701, L123 (2009)
Fender, R.: Jets from X-ray binaries. In: Lewin, W.H.G., van der Klis, M. (eds.) Compact Stellar X-Ray Sources. Cambridge University Press, Cambridge (2003)
Levinson, A.: High-energy aspects of astrophysical jets. Int. J. Mod. Phys. A21, 6015 (2006)
Butt, Y.: Beyond the myth of the supernova-remnant origin of cosmic rays. Nature 460, 701 (2009)
Marti, J.: Proc. High Energy Phenomena in Massive Stars, University of Jaen. ASP Conference Series, vol. 422. ISBN: 978-1-58381-724-7 (2010)
Aliu, E., et al.: Science 322, 1222 (2009)
Rea, N., Torres, D.F. (eds.): Proc. 1st Session of the Sant Cugat Forum of Astrophysics: ICREA International Workshop on The High-Energy Emission from Pulsars and Their Systems. Springer. ISSN:1570-6591 (2010)
Jones, T.W., O'Dell, S.L., Stein, W.A.: Physics of compact nonthermal sources. Theory of radiation processes. Astrophys. J. 188, 353 (1974)
Pian, E., et al.: BeppoSAX observations of unprecedented synchrotron activity in the BL Lacertae object Markarian 501. Astrophys. J. 492, L17 (1998)
Kino, M., Takahara, F., Kusunose, M.: Energetics of TeV blazars and physical constraints on their emission regions. Astrophys. J. 564, 97 (2002)
Dermer, C.D., Schlickeiser, R.: Model for the high-energy emission from blazars. Astrophys. J. 416, 458 (1993)
Sikora, M., Begelman, M.C., Rees, M.J.: Comptonization of diffuse ambient radiation by a relativistic jet: the source of gamma rays from blazars? Astrophys. J. 421, 153 (1994)
Mannheim, K., Biermann, P.L., Kruells, W.M.: A novel mechanism for nonthermal X-ray emission. Astron. Astrophys. 251, 723 (1991)
Aharonian, F.A.: TeV gamma rays from BL Lac objects due to synchrotron radiation of extremely high energy protons. New Astron. 5, 377 (2000)
See, e.g., Aharonian, F.A., et al.: Fast variability of tera-electron volt γ rays from the radio galaxy M87. Science 314, 1424 (2006)
See, e.g., Miralda Escude, J.: The dark age of the universe. Science 300, 1904 (2000)
Romero, G.E., Aharonian, F.A., Paredes, J.M. (eds.): Proc. High-Energy Phenomena in Relativistic Outflows II, Buenos Aires, Argentina (2009); Int. J. Mod. Phys. D19, 635–1048 (2010); http://hepro2.iar-conicet.gov.ar/talks.html
Krolik, J.H., Pier, E.A.: Relativistic motion in gamma-ray bursts. Astrophys. J. 373, 277 (1991)
Fenimore, E.E., Epstein, R.I., Ho, C.: The escape of 100 MeV photons from cosmological gamma-ray bursts. Astron. Astrophys. 97, 59 (1993)
Woods, E., Loeb, A.: Empirical constraints on source properties and host galaxies of cosmological gamma-ray bursts. Astrophys. J. 453, 583 (1995)
See, e.g., Berger, E., et al.: The afterglow and elliptical host galaxy of the short γ-ray burst GRB 050724. Nature 438, 988 (2005)
See, e.g., MacFadyen, A.I., Woosley, S.E.: Collapsars: gamma-ray bursts and explosions in "failed supernovae". Astrophys. J. 524, 262 (1999)
See, e.g., Blasi, P., Amato, E., Caprioli, D.: The maximum momentum of particles accelerated at cosmic ray modified shocks. Mon. Not. R. Astron. Soc. 375, 1471 (2007)
Bertone, G. (ed.): Particle Dark Matter. Cambridge University Press, Cambridge. ISBN 13:9780521763684 (2010)
Adriani, O., et al.: An anomalous positron abundance in cosmic rays with energies 1.5–100 GeV. Nature 458, 607 (2009)
Abdo, A., et al.: Measurement of the cosmic ray e+ + e− spectrum from 20 GeV to 1 TeV with the Fermi Large Area Telescope. Phys. Rev. Lett. 102, 181101 (2009)
Abdo, A., et al.: A limit on the variation of the speed of light arising from quantum gravity effects. Nature 462, 331 (2009)
Martinez, M.: Fundamental physics with cosmic gamma rays. J. Phys. Conf. Ser. 171, 012013 (2009)
Hanbury Brown, R.: The Intensity Interferometer. Taylor & Francis, London (1974)
Le Bohec, S., Holder, J.: Optical intensity interferometry with atmospheric Cerenkov telescope arrays. Astrophys. J. 649, 399 (2006)
Herbst, W., Shevchenko, K.S.: A photometric catalog of Herbig Ae/Be stars and discussion of the nature and cause of the variations of UX Orionis stars. Astron. J. 118, 1043 (1999)
Kieda, D.B., Swordy, S.P., Wakely, S.P.: A high resolution method for measuring cosmic ray composition beyond 10 TeV. Astropart. Phys. 15, 287 (2001)
Aharonian, F.A., et al.: First ground-based measurement of atmospheric Cherenkov light from cosmic rays. Phys. Rev. D 75, 042004 (2007)
Hinton, J.: Ground-based gamma-ray astronomy with Cherenkov telescopes. New J. Phys. 11, 055005 (2009)
Hinton, J., et al. (H.E.S.S. Collaboration): Background modelling in ground-based Cherenkov astronomy. In: Proc. Towards a Network of Atmospheric Cherenkov Detectors VII, Palaiseau, France, pp. 183–190 (2005)
Albert, J., et al. (MAGIC Collaboration): Very high energy gamma-ray observations during moonlight and twilight with the MAGIC telescope [astro-ph/0702475]
Hofmann, W.: Performance limits for Cherenkov instruments. In: Proc. Towards a Network of Atmospheric Cherenkov Detectors VII, Palaiseau, France [astro-ph/0603076] (2005)
Maier, G., Knapp, J.: Cosmic-ray events as background in imaging atmospheric Cherenkov telescopes. Astropart. Phys. 28, 72 [arXiv:0704.3567] (2007)
Amsler, C., et al. (Particle Data Group): Phys. Lett. B 667, 1 (2008) (Section 27.5, Electromagnetic cascades, p. 276)
Aharonian, F.A., et al. (H.E.S.S. Collaboration): First ground-based measurement of atmospheric Cherenkov light from cosmic rays. Phys. Rev. D 75, 042004 (2007)
Sahakian, V., Aharonian, F., Akhperjanian, A.: Cherenkov light in electron-induced air showers. Astropart. Phys. 25, 233 (2006)
Aliu, E., et al.: Improving the performance of the single-dish Cherenkov telescope MAGIC through the use of signal timing. Astropart. Phys. 30, 293 (2009)
Schliesser, A., Mirzoyan, R.: Wide-field prime-focus imaging atmospheric Cherenkov telescopes: a systematic study. Astropart. Phys. 24, 382 (2005)
Vassiliev, V.V., Fegan, S.J., Brousseau, P.F.: Wide field aplanatic two-mirror telescopes for ground-based γ-ray astronomy. Astropart. Phys. 28, 10 [astro-ph/0612718] (2007)
Aharonian, F., et al.: On the optimization of multichannel cameras for imaging atmospheric Cherenkov telescopes. J. Phys. G 21, 985 (1995)
de Naurois, M., Rolland, L.: A high performance likelihood reconstruction of γ-rays for imaging atmospheric Cherenkov telescopes. Astropart. Phys. 32, 231 (2009)
Albert, J., et al.: FADC signal reconstruction for the MAGIC telescope. Nucl. Instrum. Methods A594, 407 (2008)
Mirzoyan, R., et al.: Tagging single muons and other long-flying relativistic charged particles by ultra-fast timing in air Cherenkov telescopes. Astropart. Phys. 25, 342 (2006)
Albert, J., et al. (MAGIC Collaboration): FADC signal reconstruction for the MAGIC telescope. Nucl. Instrum. Methods A594, 407–419 [astro-ph/0612385] (2008)
Funk, S., et al.: The trigger system of the H.E.S.S. telescope array. Astropart. Phys. 22, 285 [astro-ph/0408375] (2004)
Heck, D., et al.: CORSIKA: a Monte Carlo code to simulate extensive air showers. Technical Report FZKA 6019, Forschungszentrum Karlsruhe. http://www-ik.fzk.de/corsika/ (1998)
Kertzman, M.P., Sembroski, G.H.: Computer simulation methods for investigating the detection characteristics of TeV air Cherenkov telescopes. Nucl. Instrum. Methods A343, 629 (1994)
Bernlöhr, K.: Simulation of imaging atmospheric Cherenkov telescopes with CORSIKA and sim_telarray. Astropart. Phys. 30, 149 (2008)
Guy, J.: Premiers résultats de l'expérience HESS et étude du potentiel de détection de matière noire supersymétrique. Thèse de doctorat, Université Paris V (2003)
Majumdar, P., et al.: Monte Carlo simulations for the MAGIC telescope. In: Proc. 29th ICRC, Pune, vol. 5, p. 203 (2005)
Hillas, M.: Cerenkov light images of EAS produced by primary gamma. In: Proc. 19th ICRC, La Jolla, vol. 3, p. 445 (1985)
Bock, R.K., Chilingarian, A., Gaug, M., et al.: Methods for multidimensional event classification: a case study using images from a Cherenkov gamma-ray telescope. Nucl. Instrum. Methods A516, 511 (2004)
Ohm, S., van Eldik, C., Egberts, K.: γ/hadron separation in very-high-energy γ-ray astronomy using a multivariate analysis method. Astropart. Phys. 31, 383 (2009)
Fiasson, A., Dubois, F., Masbou, J., Lamanna, G., Rosier-Lees, S.: Optimization of multivariate analysis for IACT stereoscopic systems. Astropart. Phys. 34, 25 (2010)
Lemoine-Goumard, M., Degrange, B., Tluczykont, M.: Selection and 3D-reconstruction of gamma-ray-induced air showers with a stereoscopic system of atmospheric Cherenkov telescopes. Astropart. Phys. 25, 195 (2006)
Cornils, R., et al.: The optical system of the H.E.S.S. imaging atmospheric Cherenkov telescopes. Part II: mirror alignment and point spread function. Astropart. Phys. 20, 129 (2003)
Aharonian, F.A., et al.: An exceptional very high energy gamma-ray flare of PKS 2155-304. Astrophys. J. 664, L71 (2007)
Battistoni, G., et al.: The FLUKA code: description and benchmarking. In: Albrow, M., Raja, R. (eds.) Proc. Hadronic Shower Simulation Workshop 2006, Fermilab 2006, AIP Conf. Proc., vol. 896, p. 31 (2007); Fasso, A., et al.: CERN-2005-10 (2005), INFN/TC_05/11, SLAC-R-773
Bass, S.A., et al.: Microscopic models for ultrarelativistic heavy ion collisions. Prog. Part. Nucl. Phys. 41, 225 (1998); Bleicher, M., et al.: Relativistic hadron-hadron collisions in the ultra-relativistic quantum molecular dynamics model. J. Phys. G: Nucl. Part. Phys. 25, 1859 (1999)
Kalmykov, N.N., et al.: Quark-gluon string model and EAS simulation problems at ultra-high energies. Nucl. Phys. B (Proc. Suppl.) 52B, 17 (1997)
Ostapchenko, S.S.: QGSJET-II: towards reliable description of very high energy hadronic interactions & QGSJET-II: results for extensive air showers. Nucl. Phys. B (Proc. Suppl.) 151, 143 and 147 (2006)
Ostapchenko, S.S.: Nonlinear screening effects in high energy hadronic interactions. Phys. Rev. D 74, 014026 (2006)
Engel, R., et al.: Air shower calculations with the new version of SIBYLL. In: Proc. 26th ICRC, Salt Lake City, vol. 1, p. 415 (1999)
Aharonian, F.A., et al.: Energy spectrum of cosmic-ray electrons at TeV energies. Phys. Rev. Lett. 101, 261104 (2008)
Bernlöhr, K., et al.: MC simulation and layout studies for a future Cherenkov Telescope Array. In: Proc. 30th ICRC, Mérida, vol. 3, p. 1469 (2007)
Aharonian, F.A., et al.: 5@5 - a 5 GeV energy threshold array of imaging atmospheric Cherenkov telescopes at 5 km altitude. Astropart. Phys. 15, 335 (2001)
Konopelko, A.: Altitude effect in Čerenkov light flashes of low energy gamma-ray-induced atmospheric showers. J. Phys. G: Nucl. Part. Phys. 30, 1835 (2004)
Funk, S., Hinton, J.A.: Monte-Carlo studies of the angular resolution of a future Cherenkov gamma-ray telescope. In: Proc. 4th Heidelberg Int. Symposium on High Energy Gamma-Ray Astronomy (2008)
Maier, G., et al. (AGIS Collaboration): The Advanced Gamma-ray Imaging System (AGIS): simulation studies. In: Proc. 31st ICRC, Łódź, Poland. arXiv:0907.5118v1 (2009)
(AGIS Collaboration): The Advanced Gamma-ray Imaging System (AGIS): Simulation Studies. Proc. 32nd ICRC, Łódź, Poland. arXiv:0907.5118v1 (2009) Funk, S.: A new population of very high-energy gamma-ray sources detected with H.E.S.S. in the inner part of the Milky Way. PhD Thesis, University of Heidelberg (2005)Google Scholar Schwarzschild, K.: Untersuchungen zur geometrischen Optik II: Theorie der Spiegeltelescope (1905); Abhandlungen der Gesellschaft der Wissenschaften in Göttingen, Bd 4, 2 (1905) 1Google Scholar Ritchey, G.W.: The Modern Photographic Telescope and the New Astronomical Photography. Comptes Rendus 185, 1024 (1927)Google Scholar Davies, J.M., Cotton, E.S.: Design of the quartermaster solar furnace. J. Sol. Energy Sci. Eng. 1, 16 (1957)CrossRefGoogle Scholar Bernlöhr, K., et al.: The optical system of the H.E.S.S. imaging atmospheric Cherenkov telescopes. Part I: layout and components of the system. Astropart. Phys. 20, 111 (2003)CrossRefADSGoogle Scholar Biland, A., et al. (MAGIC Collaboration): The active mirror control of the MAGIC Telescopes. Proc. 30th ICRC Merida, vol. 3, p. 1353 (2007)Google Scholar Doro, M., et al.: The reflective surface of the MAGIC telescope. Nucl. Instrum. Methods A595, 200 (2008)ADSGoogle Scholar Pareschi, G., et al.: Glass mirrors by cold slumping to cover 100 m2 of the MAGIC II Cherenkov telescope reflecting surface. Proc. SPIE 7018, 70180W (2008)CrossRefGoogle Scholar Holder, J., et al.: The first VERITAS telescope. Astropart. Phys. 25, 391 (2006)CrossRefADSMathSciNetGoogle Scholar Cogan, P., for the VERITAS Collaboration: Analysis of flash ADC data with VERITAS. arXiv:0709.4208v2 Punch, M., et al.: GigaHertz analogue memories in ground-based gamma-ray astronomy. AIP Conf. Proc. 515, 373 (2000)CrossRefADSGoogle Scholar Delagnes, E., et al.: A new GHz sampling ASIC for the H.E.S.S.-II front-end electronics. Nucl. Instrum. Methods A567, 21 (2006)ADSGoogle Scholar Tescaro, D., et al.: The readout system of the MAGIC-II Cherenkov telescope. arXiv:0907.0466 Hermann, G., et al.: A trigger and readout scheme for future Cherenkov Telescope Arrays. Proc. 4th Heidelberg Int. Symposium on High Energy Gamma-Ray Astronomy 2008. arXiv:0812.0762 Aliu, E., et al. (MAGIC Collaboration): Observation of pulsed γ-Rays above 25 GeV from the Crab Pulsar with MAGIC. Science 322, 1221 (2008)CrossRefADSGoogle Scholar Aharonian, F., et al. (H.E.S.S. Collaboration): Calibration of cameras of the H.E.S.S. detector. Astrop. Phys. 22, 109 (2004)CrossRefADSGoogle Scholar Digel, S., et al. for the AGIS Collaboration: Talk at 2009 SLAC Users Organisation Meeting. http://www-group.slac.stanford.edu/sluo/2009AnnualMeeting/Talks/Tajima.pdf Hanna, D., et al.: An LED-based flasher system for VERITAS. Nucl. Instrum. Methods A612, 278 (2009)ADSGoogle Scholar Papayannis, A., Fokitis, E.: Pierre Auger Observatory. Internal Note (1998) GAP 1998-018Google Scholar U.S. Geological Survey, GTOPO30, http://eros.usgs.gov/products/elevation/gtopo30.html Cinzano, P., Falchi, F., Elvidge, C.D.: The first World Atlas of the artificial night sky brightness. MNRAS 328, 689 (2001)CrossRefADSGoogle Scholar Erasmus, D.A.: An analysis of cloud cover and water vapor for the ALMA project. www.eso.org/gen-fac/pubs/astclim/espas/radioseeing/ (2002); An analysis and comparison of satellite-observed cloud cover and water vapor at Hanle, India and Yanbajing, Tibet. 
http://www.saao.ac.za/∼erasmus/Projects/MPIHimalaya/MPIHimalaya_overview.html (2004) 1.Centro Atómico Bariloche (CNEA-CONICET-IB/UNCuyo)(8400) San Carlos de BarilocheArgentina 2.Instituto de Astronomía y Física del Espacio (CONICET-UBA)Ciudad de Buenos AiresArgentina 3.UID GEMA - Departamento de Aeronáutica (Facultad de Ingeniera, UNLP)La Plata (1900)Argentina 4.Instituto de Tecnologías en Detección y Astropartículas (CNEACONICET-UNSAM)(1650) Buenos AiresArgentina 5.Observatorio MetereológicoMendozaArgentina 6.CEILAP (CITEDEF-CONICET)Villa MartelliArgentina 7.Instituto Argentino de Radioastronomía (CCT La Plata, CONICET)La PlataArgentina 8.Alikhanyan National Science LaboratoryYerevanArmenia 9.Institut für Astro- und TeilchenphysikLeopold-Franzens-Universität InnsbruckInnsbruckAustria 10.Centro Brasileiro de Pesquisas FísicasRio de JaneiroBrazil 11.Instituto de FísicaUniversidade Federal do Rio de JaneiroRio de JaneiroBrazil 12.Instituto de Astronomia, Geofísico, e Ciências AtmosféricasUniversidade de São PauloSão PauloBrazil 13.Instituto de Física de São CarlosUniversidade de São PauloSão CarlosBrazil 14.Institute for Nuclear Research and Nuclear Energy, BASSofiaBulgaria 15.Astronomy DepartmentSofia UniversitySofiaBulgaria 16.Institute of Astronomy and NAOSofiaBulgaria 17.Rudjer Boskovic InstituteZagrebCroatia 18.Faculty of Mathematics and Physics, Institute of Particle and Nuclear PhysicsCharles UniversityPrague 8Czech Republic 19.Tuorla ObservatoryUniversity of TurkuPiikkiöFinland 20.APC, Bâtiment Condorcet, UMR 7164 (CNRS, Université Paris 7 Denis Diderot, CEA, Observatoire de Paris)Paris Cedex 13France 21.CEA/DSM/IRFU, CEA-SaclayGif-sur-YvetteFrance 22.Université de Toulouse, UPS-OMP, IRAPToulouseFrance 23.CNRS, IRAPToulouse cedex 4France 24.UJF-Grenoble 1 / CNRS-INSUInstitut de Planétologie et d'Astrophysique de Grenoble (IPAG) UMR 5274GrenobleFrance 25.Laboratoire d'Annecy-le-Vieux de Physique des ParticulesUniversité de Savoie, CNRS/IN2P3Annecy-le-VieuxFrance 26.Laboratoire Leprince-Ringuet (LLR), École Polytechnique, UMR 7638 (CNRS, École Polytechnique)PalaiseauFrance 27.Laboratoire Univers et Particules de MontpellierUniversité Montpellier 2, CNRS/IN2P3Montpellier Cedex 5France 28.Observatoire de Paris, LUTH, CNRS, Université Paris-DiderotMeudonFrance 29.Observatoire de Paris, GEPI, CNRSUniversité Paris-DiderotMeudonFrance 30.Observatoire de Paris, UMS, CNRSUniversité Paris-DiderotMeudonFrance 31.LPNHEUniversity of Pierre et Marie CurieParis Cedex 5France 32.CPPM, Aix-Marseille Université, CNRS/IN2P3Paris cedexFrance 33.Institut für Theoretische Physik, Lehrstuhl IV: Weltraum- und AstrophysikRuhr-Universität BochumBochumGermany 34.Universität Potsdam, Institut für Physik & AstronomiePotsdamGermany 35.DESYZeuthenGermany 36.Institut für PhysikHumboldt-Universität zu BerlinBerlinGermany 37.Department of PhysicsTU Dortmund UniversityDortmundGermany 38.Universität Erlangen-Nürnberg, Physikalisches InstitutErlangenGermany 39.Universität Hamburg, Institut für ExperimentalphysikHamburgGermany 40.LandessternwarteUniversität HeidelbergHeidelbergGermany 41.Max-Planck-Institut für KernphysikHeidelbergGermany 42.Max-Planck-Institut für PhysikMünchenGermany 43.Institut für Astronomie und Astrophysik, Kepler Center for Astro and Particle PhysicsEberhard-Karls-UniversitätTübingenGermany 44.Institute for Theoretical Physics and AstrophysicsUniversität WürzburgWürzburgGermany 45.Physics DepartmentAristotle University of ThessalonikiThessalonikiGreece 46.Department of Astrophysics, Astronomy and 
Mechanics, Faculty of PhysicsUniversity of Athens, PanepistimiopolisZografosGreece 47.Department of Physics, School of Applied Mathematical and Physical SciencesNational Technical University of AthensZografou AttikisGreece 48.Dublin Institute for Advanced StudiesDublin 2Ireland 49.INAF - Istituto di Astrofisica Spaziale e Fisica Cosmica di PalermoPalermoItaly 50.INAF - Istituto di Fisica dello Spazio Interplanetario di TorinoTorinoItaly 51.INAF - Osservatorio Astronomico di BolognaBolognaItaly 52.INAF - Osservatorio Astronomico di BreraMilanoItaly 53.INAF - Osservatorio Astrofisico di CataniaCataniaItaly 54.INAF - Osservatorio Astronomico di PadovaPadovaItaly 55.INAF - Osservatorio Astronomico di RomaMonteporzio CatoneItaly 56.INAF - Istituto di Astrofisica Spaziale e Fisica Cosmica di RomaRomaItaly 57.INAF - Telescopio Nazionale Galileo, Roque de Los Muchachos Astronomical ObservatoryGarafiaSpain 58.Università di Padova and INFNPadovaItaly 59.Università di Siena, and INFN PisaSienaItaly 60.University of Udine and INFN Sezione di TriesteUdineItaly 61.University of Udine and INFN Sezione di PadovaUdineItaly 62.Osservatorio Astronomico di Trieste and INFN Sezione di TriesteUdineItaly 63.Department of Physics and MathematicsAoyama Gakuin UniversityKanagawaJapan 64.Department of Physical ScienceHiroshima UniversityHiroshimaJapan 65.Hiroshima Astrophysical Science CenterHiroshima UniversityHiroshimaJapan 66.College of ScienceIbaraki UniversityIbarakiJapan 67.Institute of Space and Astronautical Science, JAXAKanagawaJapan 68.Institute of Particle and Nuclear StudiesKEK (High Energy Accelerator Research Organization)IbarakiJapan 69.Dept. of PhysicsKinki UniversityHigashi-OsakaJapan 70.School of Allied Health SciencesKitasato UniversityKanagawaJapan 71.Department of PhysicsKonan UniversityKobeJapan 72.Department of Astronomy, Graduate School of ScienceKyoto UniversityKyotoJapan 73.Department of Physics, Graduate School of ScienceKyoto UniversityKyotoJapan 74.Yukawa Institute for Theoretical PhysicsKyoto UniversityKyotoJapan 75.Department of Applied Physics, Faculty of EngineeringUniversity of MiyazakiMiyazakiJapan 76.Department of Physics and AstrophysicsNagoya UniversityNagoyaJapan 77.Kobayashi-Maskawa Institute (KMI) for the Origin of Particles and the UniverseNagoya UniversityNagoyaJapan 78.Solar-Terrestrial Environment LaboratoryNagoya UniversityNagoyaJapan 79.Department of Earth and Space Science, Graduate School of ScienceOsaka UniversityToyonakaJapan 80.Graduate School of Science and EngineeringSaitama UniversitySaitama CityJapan 81.Department of PhysicsTokai UniversityKanagawaJapan 82.Tokai University HospitalKanagawaJapan 83.Institute of Socio-Arts and SciencesThe University of TokushimaTokushimaJapan 84.Interactive Research Center of Science, Graduate School of ScienceTokyo Institute of TechnologyTokyoJapan 85.Institute for Cosmic Ray ResearchThe University of TokyoChibaJapan 86.Faculty of Science and EngineeringWaseda UniversityTokyoJapan 87.Yamagata UniversityYamagataJapan 88.Faculty of Management InformationYamanashi Gakuin UniversityYamanashiJapan 89.Department of PhysicsUniversity of NamibiaWindhoekNamibia 90.Astronomical InstituteUtrecht UniversityUtrechtThe Netherlands 91.Astronomical Institute "Anton Pannekoek"University of AmsterdamAmsterdamThe Netherlands 92.Faculty of Physics and Applied Computer ScienceUniversity of ŁódźŁódźPoland 93.Jagiellonian UniversityCracowPoland 94.Astronomical ObservatoryUniversity of WarsawWarsawPoland 95.Nicolaus Copernicus Astronomical CenterPolish 
Academy of SciencesWarsawPoland 96.Institute of Nuclear PhysicsPolish Academy of SciencesCracowPoland 97.Toruń Centre for AstronomyNicolaus Copernicus UniversityToruńPoland 98.Space Research CentrePolish Academy of SciencesWarsawPoland 99.Faculty of Electrical Engineering, Automatics, Computer Science and ElectronicsAGH University of Science and TechnologyCracowPoland 100.Academic Computer Centre CYFRONET AGHCracowPoland 101.Centre for Space ResearchNorth-West UniversityPotchefstroomSouth Africa 102.Instituto de Astrofísica de CanariasLa LagunaSpain 103.Departamento de AstrofśicaUniversidad de La LagunaLa LagunaSpain 104.CIEMATMadridSpain 105.Institut de Ciències de l'Espai (IEEC-CSIC)BarcelonaSpain 106.Institució Catalana de Recerca i Estudis Avançats (ICREA)BarcelonaSpain 107.IFAEBellaterraSpain 108.Departament de FísicaUniversitat Autònoma de BarcelonaBellaterraSpain 109.Departament d'Estructura i Constituents de la Matèria, Institut de Ciències del Cosmos (ICC)Universitat de Barcelona (IEEC-UB)BarcelonaSpain 110.Departament d'Astronomia i Meteorologia, Institut de Ciències del Cosmos (ICC)Universitat de Barcelona (IEEC-UB)BarcelonaSpain 111.Dept. ElectronicsUniversity Complutense of MadridMadridSpain 112.Dept. FAMNUniversidad Complutense de MadridMadridSpain 113.INSAINSA, European Space Astronomy Centre of ESAMadridSpain 114.Lund ObservatoryLundSweden 115.Oskar Klein Centre, Physics DepartmentStockholm UniversityStockholmSweden 116.Oskar Klein Centre, Astronomy DepartmentStockholm UniversityStockholmSweden 117.Oskar Klein Centre, Department of PhysicsRoyal Institute of Technology (KTH)StockholmSweden 118.Department of Physics and AstronomyUppsala UniversityUppsalaSweden 119.Laboratory for High Energy PhysicsÉcole Polytechnique FédéraleLausanneSwitzerland 120.ETH Zurich, Inst. for Particle PhysicsZurichSwitzerland 121.ISDC Data Centre for Astrophysics, Observatory of GenevaUniversity of GenevaVersoixSwitzerland 122.Physik-InstitutUniversität ZürichZürichSwitzerland 123.Dept. of PhysicsDurham UniversityDurhamUK 124.Centre for Advanced Instrumentation, Dept. 
of PhysicsDurham UniversityDurhamUK 125.School of Physics & AstronomyUniversity of LeedsLeedsUK 126.Department of Physics and AstronomyUniversity of LeicesterLeicesterUK 127.Oliver Lodge LaboratoryUniversity of LiverpoolLiverpoolUK 128.School of Physics and AstronomyUniversity of NottinghamNottinghamUK 129.School of Physics & AstronomyUniversity of EdinburghEdinburghUK 130.Department of Physics and AstronomyUniversity of SheffieldSheffieldUK 131.Centre for Astrophysics Research, Science & Technology Research InstituteUniversity of HertfordshireHatfieldUK 132.STFC Rutherford Appleton LaboratoryDidcotUK 133.School of Physics & AstronomyUniversity of SouthamptonSouthamptonUK 134.Department of PhysicsUniversity of OxfordOxfordUK 135.Argonne National LaboratoryArgonneUSA 136.Department of Physics and AstronomyIowa State UniversityAmesUSA 137.Department of PhysicsPittsburg State UnversityPittsburgUSA 138.Kavli Institute for Particle Astrophysics and Cosmology, Department of Physics and SLAC National Accelerator LaboratoryStanford UniversityStanfordUSA 139.Harvard-Smithsonian Center for AstrophysicsCambridgeUSA 140.University of Alabama in Huntsville - Center for Space Physics and Aeronomic ResearchHuntsvilleUSA 141.Department of Physics and AstronomyUniversity of CaliforniaLos AngelesUSA 142.Santa Cruz Institute for Particle Physics and Department of PhysicsUniversity of CaliforniaSanta CruzUSA 143.Enrico Fermi InstituteUniversity of ChicagoChicagoUSA 144.Department of Physics and AstronomyUniversity of UtahSalt Lake CityUSA 145.Department of PhysicsWashington UniversitySt. LouisUSA 146.Astronomy DepartmentAdler Planetarium and Astronomy MuseumChicagoUSA 147.Department of Astronomy & AstrophysicsPennsylvania State UniversityUniversity ParkUSA 148.Department of PhysicsPurdue UniversityWest LafayetteUSA 149.University of California, DavisDavisUSA 150.School of Physics and AstronomyUniversity of MinnesotaMinneapolisUSA 151.Dept. of Physics & Astronomy, Barnard CollegeColumbia UniversityNew YorkUSA 152.Department of Physics and AstronomyUniversity of IowaIowa CityUSA 153.Department of AstronomyYale UniversityNew HavenUSA 154.CTA Project Office, LandessternwarteUniversität HeidelbergHeidelbergGermany The CTA Consortium, Actis, M., Agnetta, G. et al. Exp Astron (2011) 32: 193. https://doi.org/10.1007/s10686-011-9247-0 Received 26 November 2010
CommonCrawl
viXra.org > Geometry Abstracts
Any replacements are listed farther down
[370] viXra:1906.0404 [pdf] submitted on 2019-06-20 19:17:33 Via Geometric Algebra: Direction and Distance Between Two Points on a Spherical Earth Authors: James A. Smith Comments: 17 Pages. As a high-school-level example of solving a problem via Geometric (Clifford) Algebra, we show how to calculate the distance and direction between two points on Earth, given the locations' latitudes and longitudes. We validate the results by comparing them to those obtained from online calculators. This example invites a discussion of the benefits of teaching spherical trigonometry (the usual way of solving such problems) at the high-school level versus teaching how to use Geometric Algebra for the same purpose. Category: Geometry A Trigonometric Proof of Oppenheim's and Pedoe Inequality Authors: Israel Meireles Chrisostomo Comments: 7 Pages. This problem first appeared in the American Mathematical Monthly in 1965, proposed by Sir Alexander Oppenheim. As a matter of curiosity, the American Mathematical Monthly is the most widely read mathematics journal in the world. On the other hand, Oppenheim was a brilliant mathematician, and for the excellence of his work in mathematics he obtained the title of "Sir", given by the English to English citizens who stand out in the national and international scenario. Oppenheim is better known in the academic world for his contribution to the field of Number Theory, known as the Oppenheim Conjecture. A Trigonometric Proof of Oppenheim's Inequality The SUSY Non-Commutativ Geometry Authors: Antoine Balan Comments: 1 page, written in english We define the notion of SUSY non-commutative geometry as a supersymmetric theory of quantum spaces. Relation Between Mean Proportionals of Parts and the Whole of a Line Segment Authors: Radhakrishnamurty Padyala Comments: 4 Pages. Galileo derived a result for the relation between the two mean proportionals of the parts and the whole of a given line segment. He derived it for the internal division of the line segment. We derive in this note a corresponding result for the external division of a given line segment. About One Geometric Variation Problem Authors: Emanuels Grinbergs Comments: 13 Pages. Translated from Latvian by Dainis Zeps Translation of the article of Emanuels Grinbergs, ОБ ОДНОЙ ГЕОМЕТРИЧЕСКОЙ ВАРИАЦИОННОЙ ЗАДАЧЕ, that is published in LVU Zinātniskie darbi, 1958.g., sējums XX, izlaidums 3, 153.-164., in Russian https://dspace.lu.lv/dspace/handle/7/46617. From Pythagorean Theorem to Cosine Theorem.
Authors: Jesús Álvarez Lobo Easy and natural demonstration of the cosine theorem, based on the extension of the Pythagorean theorem. Solution of a Vector-Triangle Problem Via Geometric (Clifford) Algebra As a high-school-level application of Geometric Algebra (GA), we show how to solve a simple vector-triangle problem. Our method highlights the use of outer products and inverses of bivectors. The Flow of Chern We propose a flow over a Kaehler manifold, called the Chern flow. Three, Four and N-Dimensional Swastikas & their Projections Authors: Sascha Vongehr Comments: 10 pages, Six Figures, Keywords: Higher Dimensional Geometry; Hyper Swastika; Reclaiming of Symbols; Didactic Arts Difficulties with generalizing the swastika shape for N dimensional spaces are discussed. While distilling the crucial general characteristics such as whether the number of arms is 2^N or 2N, a three dimensional (3D) swastika is introduced and then a construction algorithm for any natural number N so that it reproduces the 1D, 2D, and 3D shapes. The 4D hyper swastika and surfaces in its hypercube envelope are then presented for the first time. Via Geometric (Clifford) Algebra: Equation for Line of Intersection of Two Planes As a high-school-level example of solving a problem via Geometric Algebra (GA), we show how to derive an equation for the line of intersection between two given planes. The solution method that we use emphasizes GA's capabilities for expressing and manipulating projections and rotations of vectors. Foundations of Conic Conformal Geometric Algebra and Simplified Versors for Rotation, Translation and Scaling Authors: Eckhard Hitzer, Stephen J. Sangwine Comments: 15 Pages. submitted to Topical Collection of Adv. in Appl. Clifford Algebras, for Proceedings of FTHD 2018, 21 Feb. 2019, 1 table, 1 figure. This paper explains in algebraic detail how two-dimensional conics can be defined by the outer products of conformal geometric algebra (CGA) points in higher dimensions. These multivector expressions code all types of conics in arbitrary scale, location and orientation. Conformal geometric algebra of two-dimensional Euclidean geometry is fully embedded as an algebraic subset. With small model preserving modifications, it is possible to consistently define in conic CGA versors for rotation, translation and scaling, similar to [https://doi.org/10.1007/s00006-018-0879-2], but simpler, especially for translations. Keywords: Clifford algebra, conformal geometric algebra, conics, versors. Mathematics Subject Classification (2010). Primary 15A66; Secondary 11E88, 15A15, 15A09. Cubic Curves and Cubic Surfaces from Contact Points in Conformal Geometric Algebra Authors: Eckhard Hitzer, Dietmar Hildenbrand Comments: 11 Pages. accepted for M. Gavrilova et al (eds.), Proceedings of Workshop ENGAGE 2019 at CGI 2019 with Springer LNCS, April 2019, 1 table. This work explains how to extend standard conformal geometric algebra of the Euclidean plane in a novel way to describe cubic curves in the Euclidean plane from nine contact points or from the ten coefficients of their implicit equations. As algebraic framework serves the Clifford algebra Cl(9,7) over the real sixteen dimensional vector space R^{9,7}. These cubic curves can be intersected using the outer product based meet operation of geometric algebra. An analogous approach is explained for the description and operation with cubic surfaces in three Euclidean dimensions, using as framework Cl(19,16). 
Keywords: Clifford algebra, conformal geometric algebra, cubic curves, cubic surfaces, intersections The Flow of Hermite-Ricci Comments: 1 page, written in french We define a flow for hermitian manifolds. We call it the Hermite-Ricci flow. Expanding Polynomials with Regular Polygons Authors: Timothy W. Jones Expanding the root form of a polynomial for large numbers of roots can be complicated. Such polynomials can be used to prove the irrationality of powers of pi, so a technique for arriving at expanded forms is needed. We show here how roots of polynomials can generate regular polygons whose vertices considered as roots form known expanded polynomials. The product of these polynomials can be simple enough to yield the desired expanded form. Computing a Well-Connected Midsurface Authors: Yogesh H. Kulkarni, Anil D. Sahasrabudhe, Muknd S. Kale Computer-aided Design (CAD) models of thin-walled parts such as sheet metal or plastics are often reduced dimensionally to their corresponding midsurfaces for quicker and fairly accurate results of Computer-aided Engineering (CAE) analysis. Generation of the midsurface is still a time-consuming and mostly manual task, due to the lack of robust and automated techniques. Midsurface failures manifest in the form of gaps, overlaps, not-lying-halfway, etc., which can take hours or even days to correct. Most of the existing techniques work on the complex final shape of the model, forcing the usage of hard-coded heuristic rules developed on a case-by-case basis. The research presented here proposes to address these problems by leveraging feature-parameters, made available by modern feature-based CAD applications, and by effectively using them for sub-processes such as simplification, abstraction and decomposition. In the proposed system, at first, features which are not part of the gross shape are removed from the input sheet metal feature-based CAD model. Features of the gross-shape model are then transformed into their corresponding generic feature equivalents, each having a profile and a guide curve. The abstracted model is then decomposed into non-overlapping cellular bodies. The cells are classified into midsurface-patch generating cells, called 'solid cells', and patch-connecting cells, called 'interface cells'. In solid cells, midsurface patches are generated either by offset or by sweeping the midcurve generated from the owner-feature's profile. Interface cells join all the midsurface patches incident upon them. The output midsurface is then validated for correctness. At the end, real-life parts are used to demonstrate the efficacy of the approach. The Ricci Flow for Connections We define a natural Ricci flow for connections over a Riemannian manifold. A Special Geometry - and its Consequences Authors: Ulrich E. Bruchholz It is explained why the geometry of space-time, first found by Rainich, is generally valid. The equations of this geometry, the known Einstein-Maxwell equations, are discussed, and results are listed. We shall see how these tensor equations can be solved. As well, neutrosophics is more supported than dialectics. We shall find even more categories than described in neutrosophics. A Note on the Arbelos in Wasan Geometry, Satoh's Problem and a Circle Pattern Authors: Hiroshi Okumura We generalize a problem in Wasan geometry involving an arbelos, and construct a self-similar circle pattern. Dos Cuestiones de Geometría (Two Questions of Geometry) Authors: Edgar Valdebenito In this note we present two elementary questions of geometry.
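A quick numerical aside to the "Expanding Polynomials with Regular Polygons" entry above (not the author's construction, only the standard fact it rests on): the n-th roots of unity are the vertices of a regular n-gon inscribed in the unit circle, and the monic polynomial having them as roots expands to z^n - 1. A minimal sketch, with function and variable names of my own choosing:

import numpy as np

n = 7
# Vertices of a regular n-gon on the unit circle = the n-th roots of unity.
roots = np.exp(2j * np.pi * np.arange(n) / n)

# Expand the root form prod(z - r_k); coefficients are returned from degree 0 upward.
coeffs = np.polynomial.polynomial.polyfromroots(roots)

# Up to floating-point noise this is z^n - 1, i.e. [-1, 0, ..., 0, 1].
print(np.round(coeffs.real, 10))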
A Closed 3-Form in Spinorial Geometry Comments: 2 pages, written in english We define a 3-form over a spinorial manifold by mean of the curvature tensor and the Clifford multiplication. Division by Zero Calculus in Trigonometric Functions Authors: Saburou Saitoh Comments: 19 Pages. In this paper, we will introduce the division by zero calculus in triangles and trigonometric functions as the first stage in order to see the elementary properties. In this paper, we will introduce the division by zero calculus in triangles and trigonometric functions as the first stage in order to see the elementary properties. Refutation of Euclidean Geometry Embedded in Hyperbolic Geometry for Equal Consistency Authors: Colin James III Comments: 2 Pages. © Copyright 2019 by Colin James III All rights reserved. Respond to author by email only: info@cec-services dot com. See updated abstract at ersatz-systems.com. (We warn troll Mikko at Disqus to read the article four times before hormonal typing.) The following conjecture is refuted: "[An] n-dimensional Euclidean geometry can be embedded into (n+1)-dimensional hyperbolic non Euclidean geometry. Therefore hyperbolic non Euclidean geometry and Euclidean geometry are equally consistent, that is, either both are consistent or both are inconsistent." Hence, the conjecture is a non tautologous fragment of the universal logic VŁ4. Denial of Consistency for the Lobachevskii Non Euclidean Geometry We prove two parallel lines are tautologous in Euclidean geometry. We next prove that non Euclidean geometry of Lobachevskii is not tautologous and hence not consistent. What follows is that Riemann geometry is the same, and non Euclidean geometry is a segment of Euclidean geometry, not the other way around. Therefore non Euclidean geometries are a non tautologous fragment of the universal logic VŁ4. A New Tensor in Differential Geometry We propose a 3-form in differential geometry which depends only of a connection over the tangent fiber bundle. A Note of Differential Geometry Authors: Abdelmajid Ben Hadj Salem In this note, we give an application of the Method of the Repère Mobile to the Ellipsoid of Reference in Geodesy using a symplectic approach. Refutation of Riemannian Geometry as Generalization of Euclidean Geometry From the classical logic section on set theory, we evaluate definitions of the atom and primitive set. None is tautologous. From the quantum logic and topology section on set theory, we evaluate the disjoint union (as equivalent to the XOR operator) and variances in equivalents for the AND and OR operators. None is tautologous. This reiterates that set theory and quantum logic are not bivalent, and hence non-tautologous segments of the universal logic VŁ4. The assertion of Riemannian geometry as generalization of Euclidean geometry is not supported. Proceedings on Non Commutative Geometry. Authors: Johan Noldus Non commutative geometry is developed from the point of view of an extension of quantum logic. We provide for an example of a non-abelian simplex as well as a non-abelian curved Riemannian space. Solution Proof of Bellman's Lost in the Forest Problem for Triangles From the area and dimensions of an outer triangle, the height point of an inner triangle implies the minimum distance to the outer triangle. This proves the solution of Bellman's Lost in the forest problem for triangles. By extension, it is the general solution proof for other figures. 
Cluster Packaging of Spheres Versus Linear Packaging of Spheres Authors: Helmut Söllinger Comments: 10 Pages. language: German The paper analyses the issue of optimised packaging of spheres of the same size. The question is whether a linear packaging of spheres in the shape of a sausage or a spatial cluster of spheres can minimise the volume enveloping the spheres. There is an assumption that for fewer than 56 spheres the linear packaging is denser and for 56 spheres the cluster is denser, but the question remains what a cluster of 56 spheres could look like. The paper shows two possible ways to build such a cluster of 56 spheres. The author finds clusters of 59, 62, 65, 66, 68, 69, 71, 72, 74, 75, 76, 77, 78, 79 and 80 spheres - using the same method - which are denser than a linear packaging of the same number, and comes to the conjecture that all convex clusters of spheres of sufficient size are denser than linear ones. On the Equivalence of Closed Figures and Infinitely Extended Lines and the Conclusions Drawn from it Authors: Madhur Sorout The equivalence of closed figures and infinitely extended lines may lead us to understand the physical reality of infinities. This paper doesn't include what infinities mean in the physical world, but is mainly focused on the equivalence of closed figures and infinitely extended lines. Using this principle, some major conclusions can be drawn. The equivalence of closed figures and infinitely extended lines is mainly based on the idea that closed figures and infinitely extended lines are equivalent. One of the most significant conclusions drawn from this equivalence is that if any object moves along a straight infinitely extended line, it will return to the point where it started to move after some definite time. Three-Dimensional Quadrics in Conformal Geometric Algebras and Their Versor Transformations Authors: Eckhard Hitzer Comments: 15 Pages. Submitted to Topical Collection of Adv. in Appl. Clifford Algebras, for Proceedings of AGACSE 2018, 23 Feb. 2019. This work explains how three dimensional quadrics can be defined by the outer products of conformal geometric algebra points in higher dimensions. These multivector expressions code all types of quadrics in arbitrary scale, location and orientation. Furthermore a newly modified (compared to Breuils et al, 2018, https://doi.org/10.1007/s00006-018-0851-1) approach now allows not only the use of the standard intersection operations, but also of versor operators (scaling, rotation, translation). The new algebraic form of the theory will be explained in detail. The G-connections We define the notion of G-connections over vector fiber bundles with action of a Lie group G. The Symplectic Laplacian We construct a symplectic Laplacian which is a differential operator of order 1 depending only on a connection and a symplectic form. A Generalized Clifford Algebra We propose a generalization of the Clifford algebra. We give an application to the Dirac operator. Blow-up of Feuerbach's Theorem Authors: Alexander Skutin In this short note we introduce the blow-up of Feuerbach's theorem. We define a closed 2-form for any spinorial manifold. We deduce characteristic classes. Archimedean Incircle of a Triangle We generalize several Archimedean circles, which are the incircles of special triangles. Fractales : La Geometría del Caos (Fractals: The Geometry of Chaos) This note shows a collection of fractals. Elementary Fractals : Part V Comments: 108 Pages. This note presents a collection of elementary fractals.
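A small numerical aside to the sphere-packing entry above ("Cluster Packaging of Spheres Versus Linear Packaging of Spheres"): the enveloping volume of the linear ("sausage") arrangement of n touching unit spheres is simply the volume of a capsule, a cylinder of length 2(n-1) closed by two hemispherical caps. The sketch below only tabulates that volume and the resulting packing density under this assumption; it says nothing about the cluster constructions of the paper, and the function name is my own.

import numpy as np

def sausage_hull_volume(n, r=1.0):
    # Convex hull of n touching spheres of radius r with collinear centres:
    # a cylinder of radius r and length 2*r*(n-1), plus two hemispherical end caps.
    return np.pi * r**2 * (2.0 * r * (n - 1)) + (4.0 / 3.0) * np.pi * r**3

for n in (3, 10, 56):
    hull = sausage_hull_volume(n)
    density = n * (4.0 / 3.0) * np.pi / hull   # fraction of the hull filled by the spheres
    print(f"n={n:3d}  hull volume={hull:9.2f}  density={density:.4f}")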
Solution of a Sangaku "Tangency" Problem via Geometric Algebra Because the shortage of worked-out examples at introductory levels is an obstacle to widespread adoption of Geometric Algebra (GA), we use GA to solve one of the beautiful sangaku problems from 19th-Century Japan. Among the GA operations that prove useful is the rotation of vectors via the unit bivector i. Refutation of the Planar Eucleadian R-Geometry of Tarski Comments: 3 Pages. © Copyright 2016-2018 by Colin James III All rights reserved. Updated abstract at ersatz-systems.com. Respond to the author by email at: info@ersatz-systems dot com. We evaluate the axioms of the title. The axiom of identity of betweenness and the axiom of Euclid are tautologous, but the others are not. The commonplace expression of the axiom of Euclid does not match its other two variations, which is troubling. This effectively refutes the planar R-geometry. Geometries of O Authors: Hannes Hutzelmeyer Geometries of O adhere to Ockham's principle of simplest possible ontology: the only individuals are points, there are no straight lines, circles, angles etc., just as it was laid down by Tarski in the 1920s, when he put forward a set of axioms that only contain two relations, quaternary congruence and ternary betweenness. However, relations are not as intuitive as functions when constructions are concerned. Therefore the planar geometries of O contain only functions and no relations to start with. Essentially three quaternary functions occur: appension for line-joining of two pairs of points, linisection representing intersection of straight lines and circulation corresponding to intersection of circles. Functions are strictly defined by composition of given ones only. Both Euclid and Lobachevsky planar geometries are developed using a precise notation for object-language and metalanguage that allows for a very broad area of mathematical systems up to theory of types. Some astonishing results are obtained, among them: (A) Based on a special triangle construction Euclid planar geometry can start with a less powerful ontological basis than Lobachevsky geometry. (B) Usual Lobachevsky planar geometry is not complete, there are nonstandard planar Lobachevsky geometries. One needs a further axiom, the 'smallest' system is produced by the proto-octomidial-axiom. (C) Real numbers can be abandoned in connection with planar geometry. A very promising conjecture is put forward stating that the Euclidean Klein-model of Lobachevsky planar geometry does not contain all points of the constructive Euclidean unit-circle. Elementary Fractals: Part IV Elementary Fractals: Part III Elementary Fractals: Part II Elementary Fractals: Part I The Arbelos in Wasan Geometry, Problems of Izumiya and Naitō We generalize two sangaku problems involving an arbelos proposed by Izumiya and Naitō, and show the existence of six non-Archimedean congruent circles. Possbile Cubic to Spherical Transfomaions Authors: Adham Ahmed Mohamed Ahmed Comments: 1 Page. This paper talks about a hypothesis relating a cube, the sphere inside the cube, the excess volume of the cube over that sphere, and the excess volume of a sphere over the cube when the cube is inside the sphere. If you spin a cube around an axis passing through its midpoint, would the cylinder formed have an excess in volume over the sphere equal to the excess in volume of the cylinder over the cube?
Derivative-Based Numerical Method for Penalty-Barrier Nonlinear Programming Authors: Martin Peter Neuenhofen We present an NLP solver for nonlinear optimization with quadratic penalty terms and logarithmic barrier terms. The method is suitable for large sparse problems. Each iteration has a polynomial time-complexity. The method has global convergence and local quadratic convergence, with a convergence radius that depends little on our method but rather on the geometry of the problem. От ошибки Гильберта к исчислению сфер (From Hilbert's Error to a Calculus of Spheres) Authors: Франц Герман (Franz Hermann) What the trace of the projective plane is, and how it can be seen, is described in this note. Exact and Intuitive Geometry Explanation: Why Does a Half-angle-rotation in Spin or Quaternion Space Correspond to the Whole-angle-rotation in Normal 3D Space? Authors: Hongbing Zhang Comments: 18 Pages. Please indicate this source from Hongbing Zhang when citing the contents in works of science or popular science. Why does a half-angle-rotation in quaternion space or spin space correspond to a whole-angle-rotation in normal 3D space? The question is equivalent to why a half angle in the representation of SU(2) corresponds to a whole angle in the representation of SO(3). Usually we use the computation of the abstract mathematics to answer the question. But now I will give an exact and intuitive geometry-explanation on it in this paper. К вопросу о представлении дельта-функции Дирака (On the Representation of the Dirac Delta Function) Comments: Pages. In this note we present a representation of the Dirac delta function that we shall call natural. The existing ways of representing the Dirac delta function are, on the whole, artificial in character. Smoothing using Geodesic Averages Authors: Jan Hakenberg Geodesic averages have been used to generalize curve subdivision and Bézier curves to Riemannian manifolds and Lie groups. We show that geodesic averages are suitable to perform smoothing of sequences of data in nonlinear spaces. In applications that produce temporally uniformly sampled manifold data, the smoothing removes high-frequency components from the signal. As a consequence, discrete differences computed from the smoothed sequence are more regular. Our method is therefore a simpler alternative to the extended Kalman filter. We apply the smoothing technique to noisy localization estimates of mobile robots. The Seiber-Witten Equations for Vector Fields By similarity with the Seiberg-Witten equations, we propose a set of two equations, depending on a spinor and a vector field. An Introduction to Non-Commutative (Pseudo) Metric Geometry. We introduce the reader to the problematic aspects of formulating in concreto a suitable notion of geometry. Here, we take the canonical approach and give some examples. Теорема о поляре треугольника (A Theorem on the Polar of a Triangle) A theorem is formulated and proved that has not previously appeared in the literature on projective geometry. On the basis of the "theorem on the polar of a three-vertex figure" a whole class of construction problems opens up. The theorem may be useful to students of mathematics departments of pedagogical universities, as well as to secondary-school mathematics teachers for optional classes. Fractales Del Tipo Newton Asociados al Polinomio: P(z)=z^9+3z^6+3z^3-1,z Complejo. (Newton-Type Fractals Associated with the Polynomial P(z)=z^9+3z^6+3z^3-1, z Complex) In this note we show some Newton-type fractals associated with the polynomial p(z)=z^9+3z^6+3z^3-1, z complex. A Generalization of the Levi-Civita Connection We define here a generalization of the well-known Levi-Civita connection. We choose an automorphism and define a connection with the help of a (non-symmetric) bilinear form.
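A small numerical companion to the half-angle entry above ("Exact and Intuitive Geometry Explanation: Why Does a Half-angle-rotation in Spin or Quaternion Space Correspond to the Whole-angle-rotation in Normal 3D Space?"). It is not the author's geometric argument, only a check of the standard fact the title refers to: a unit quaternion built from half the angle rotates an ordinary vector by the whole angle under v -> q v q*. Function names are my own.

import numpy as np

def qmul(a, b):
    # Hamilton product of quaternions stored as (w, x, y, z).
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2])

def rotate(v, axis, angle):
    # The quaternion encodes HALF the angle; the conjugation turns the vector by the FULL angle.
    axis = np.asarray(axis, float) / np.linalg.norm(axis)
    q = np.concatenate(([np.cos(angle / 2.0)], np.sin(angle / 2.0) * axis))
    q_conj = q * np.array([1.0, -1.0, -1.0, -1.0])
    v_as_quat = np.concatenate(([0.0], v))
    return qmul(qmul(q, v_as_quat), q_conj)[1:]

# A 90-degree turn about z sends x-hat to y-hat, although the quaternion itself encodes only 45 degrees.
print(np.round(rotate(np.array([1.0, 0.0, 0.0]), [0, 0, 1], np.pi / 2), 6))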
A Generalization of the Clifford Algebra We propose here a generalization of the Clifford algebra by means of two endomorphisms. We deduce a generalized Lichnerowicz formula for the space of modified spinors. Definition of the Term "DIRECTION" Определение термина «НАПРАВЛЕНИЕ» Authors: Somsikov A.I. One of the initial or primary concepts, usually considered "the simplest" (not expressible through other concepts), is examined. The structure of this concept is revealed. Algebraic and geometrical consequences are found. (Ip-GP Version 1.0 15 Pages 14.08.2018) on the Intrinsic Paradox of the Geometric Point Definition (Solved Using the Included Middle Logic) as the Main Cause of Euclid's Postulate "inaccuracy", Allowing the Existence not Only of Non-Euclidean Geomet Authors: Andrei Lucian Dragoi This paper brings to attention the intrinsic paradox of the geometric point (GP) definition, a paradox solved in this paper by using Stéphane Lupasco's Included Middle Logic (IML) (which was stated by Basarab Nicolescu as one of the three pillars of transdisciplinarity [TD]) and its extended form: based on IML, a new "t-metamathematics" (tMM) (including a t-metageometry [tMG]) is proposed, which may explain the main cause of Euclid's parallel postulate (EPP) "inaccuracy", allowing the existence not only of non-Euclidean geometries (nEGs), but also the existence of new EPP variants. tMM has far-reaching implications, including help in redefining the basics of Einstein's General relativity theory (GRT), quantum field theory (QFT), superstring theories (SSTs) and M-theory (MT). KEYWORDS (including a list of main abbreviations): geometric point (GP); Stéphane Lupasco's Included Middle Logic (IML); Basarab Nicolescu, transdisciplinarity (TD); "t-metamathematics" (tMM); t-metageometry (tMG); Euclid's parallel postulate (EPP); non-Euclidean geometries; new EPP variants; Einstein's General relativity theory (GRT); quantum field theory (QFT); superstring theories (SSTs); M-theory (MT); The Balan-Killing Manifolds We define here the notion of Balan-Killing manifolds which are solutions of differential equations over the metrics of spin manifolds. Curve Subdivision in SE(2) We demonstrate that curve subdivision in the special Euclidean group SE(2) allows the design of planar curves with favorable curvature. We state the non-linear formula to position a point along a geodesic in SE(2). Curve subdivision in the Lie group consists of trigonometric functions. When projected to the plane, the refinement method reproduces circles and straight lines. The limit curves are designed by intuitive placement of control points in SE(2). Study of Transformations Authors: Yeray Cachón Santana This paper covers a first approach to the study of the angles and moduli of vectors in Hilbert spaces considering a Riemannian metric where, instead of taking the usual scalar product on the Hilbert space, this is extended by the tensor of the geometry g. As far as I know, there is no study covering Hilbert spaces with a Riemannian metric. It will be shown how to get the angle and modulus on Hilbert spaces with a tensor metric, as well as the vector product, symmetry and rotations. A section of variationals shows a system of differential equations for a Riemannian metric.
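A minimal numerical sketch of the idea in the "Study of Transformations" entry directly above (my own illustration, not the paper's code): replace the Euclidean dot product by a bilinear form u^T g v with a symmetric positive-definite tensor g, and compute moduli and angles from it. The metric tensor used here is a hypothetical example.

import numpy as np

def g_inner(u, v, g):
    # Inner product induced by the metric tensor g (assumed symmetric positive-definite).
    return u @ g @ v

def g_modulus(u, g):
    return np.sqrt(g_inner(u, u, g))

def g_angle(u, v, g):
    # Angle measured with g instead of the usual scalar product.
    c = g_inner(u, v, g) / (g_modulus(u, g) * g_modulus(v, g))
    return np.arccos(np.clip(c, -1.0, 1.0))

g = np.array([[2.0, 0.3],
              [0.3, 1.0]])              # a hypothetical metric tensor
u = np.array([1.0, 0.0])
v = np.array([0.0, 1.0])
print(g_modulus(u, g))                  # no longer 1 under g
print(np.degrees(g_angle(u, v, g)))     # Euclidean-orthogonal vectors need not be g-orthogonal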
Making Sense of Bivector Addition As a demonstration of the coherence of Geometric Algebra's (GA's) geometric and algebraic concepts of bivectors, we add three geometric bivectors according to the procedure described by Hestenes and Macdonald, then use bivector identities to determine, from the result, two bivectors whose outer product is equal to the initial sum. In this way, we show that the procedure that GA's inventors defined for adding geometric bivectors is precisely that which is needed to give results that coincide with those obtained by calculating outer products of vectors that are expressed in terms of a 3D basis. We explain that that accomplishment is no coincidence: it is a consequence of the attributes that GA's designers assigned (or didn't) to bivectors. The Dirac Operators for Lie Groups In the case of a manifold which is a Lie group, a Dirac operator can be defined acting over the vector fields of the Lie group instead of the spinors. On Surface Measures on Convex Bodies and Generalizations of Known Tangential Identities Authors: Johan Aspegren One theme of this paper is to extend known results from polygons and balls to general convex bodies in n dimensions. Another theme stems from approximating a convex surface with a polytope surface. Our result gives a sufficient and necessary condition for a natural approximation method to succeed (in principle) in the case of surfaces of convex bodies. Thus, Schwartz's paradox does not affect our method. This allows us to define certain surface measures on surfaces of convex bodies in a novel and simple way. Vortex Equation in Holomorphic Line Bundle Over Non-Compact Gauduchon Manifold Authors: Zhenghan Shen, Wen Wang, Pan Zhang In this paper, by the method of heat flow and the method of exhaustion, we prove an existence theorem of Hermitian-Yang-Mills-Higgs metrics on holomorphic line bundles over a class of non-compact Gauduchon manifolds. Learning Geometric Algebra by Modeling Motions of the Earth and Shadows of Gnomons to Predict Solar Azimuths and Altitudes Because the shortage of worked-out examples at introductory levels is an obstacle to widespread adoption of Geometric Algebra (GA), we use GA to calculate Solar azimuths and altitudes as a function of time via the heliocentric model. We begin by representing the Earth's motions in GA terms. Our representation incorporates an estimate of the time at which the Earth would have reached perihelion in 2017 if not affected by the Moon's gravity. Using the geometry of the December 2016 solstice as a starting point, we then employ GA's capacities for handling rotations to determine the orientation of a gnomon at any given latitude and longitude during the period between the December solstices of 2016 and 2017. Subsequently, we derive equations for two angles: that between the Sun's rays and the gnomon's shaft, and that between the gnomon's shadow and the direction "north" as traced on the ground at the gnomon's location. To validate our equations, we convert those angles to Solar azimuths and altitudes for comparison with simulations made by the program Stellarium. As further validation, we analyze our equations algebraically to predict (for example) the precise timings and locations of sunrises, sunsets, and Solar zeniths on the solstices and equinoxes.
We emphasize that the accuracy of the results is only to be expected, given the high accuracy of the heliocentric model itself, and that the relevance of this work is the efficiency with which that model can be implemented via GA for teaching at the introductory level. On that point, comments and debate are encouraged and welcome. A Note on a Problem in Misho Sampo Comments: 2 Pages. This paper will be submitted to Sangaku Journal of Mathematics. A problem involving an isosceles triangle with a square and three congruent circles is generalized. Euclid's Geometry is Just in Our Mind, Rather Than Describing the Real World Authors: Arturo Tozzi, James Peters The first definition (prior to the well-known five postulates) of Euclid describes the point as "that of which there is no part". Here we show how the Euclidean account of manifolds is untenable in our physical realm and that the concepts of points, lines, surfaces, volumes need to be revisited, in order to allow us to be able to describe the real world. Here we show that the basic object in a physical context is a traversal of spacetime via tiny subregions of spatial regions, rather than the Euclidean point. We also elucidate the psychological issues that lead our mind to think to points and lines as really existing in our surrounding environment. The Radian's Coincidence Conjecture Authors: Ryan Haddad This conjecture may be a tool in defining the indefinite tangent of 90 degrees, and is a (new) mathematical coincidence that is indeed strange; why would the tangent of angles near 90 degrees be equal to the angle of the radian multiplied by powers of 10? In fact, if there is no geometrical explanation in current mathematics, it may resides in metamathematics. Toroidal Approach to the Doubling of the Cube Authors: Gerasimos T. Soldatos Comments: Published in: FORUM GEOMETRICORUM, VOL. 18, PAGES 93-97 A doubling of the cube is attempted as a problem equivalent to the doubling of a horn torus. Both doublings are attained through the circle of Apollonius. Proof of Playfair's Axiom Hits a Roadblock Authors: Prashanth R. Rao The Playfair's axiom is considered an equivalent of Euclid's fifth postulate or parallel postulate in Euclidean planar geometry. It states that in a given plane, with a line in the plane and a point outside the line that is also in the same plane, one and only one line passes through that point that is also parallel to the given line. Previous proofs of Euclid's postulate or the Playfair's axiom have unintentionally assumed parallel postulate to prove it. Also, these axioms have different results in hyperbolic and spherical geometries. We offer proof for the Playfair's axiom for subset of cases in the context of plane Euclidean geometry and describe another subset of cases that cannot be proven by the same approach. The Flow of Dirac-Ricci Comments: 2 pages, written in french Following the definition of the flow of Ricci, we construct a flow of hermitian metrics for the spinors fiber bundle. The Flow of Ricci over an Hermitian Fiber Bundle The flow of Ricci is defined for the hermitian metric of a fiber bundle. A Note on a Problem Invoving a Square in a Curvilinear Triangle Comments: 3 Pages. This is a paper considering a problem in Wasan geometry. A problem involving a square in the curvilinear triangle made by two touching congruent circles and their common tangent is generalized. Extension of Proposition 23 from Euclid's Elements Proposition 23 states that two parallel lines in a plane never intersect. 
We use this definition with the first and second postulates of Euclid to prove that two distinct lines through a single point cannot be parallel. A Relationship Between a Cevian, Two Perpendicular Bisectors and a Median in an Isosceles Triangle. Comments: 1 Page. Revista Escolar de la Olimpiada Iberoamericana de Matemática. Volume 18. Spanish. A very simple solution to a geometric problem (proposed by Alex Sierra Cardenas, Medellin, Colombia) that involves a cevian, two perpendicular bisectors and a median in an isosceles triangle. Geometry Beyond Algebra. The Theorem of Overlapped Polynomials (TOP) and Its Application to the Sawa Masayoshi's Sangaku Problem. The Adventure of Solving a Mathematical Challenge Stated in 1821. Comments: 47 Pages. https://arxiv.org/abs/1110.1299 This work presents for the first time a solution to the 1821 unsolved Sawa Masayoshi's problem, giving an explicit and algebraically exact solution for the symmetric case (particular case b = c, i.e., for ABC an isosceles right-angled triangle), see (1.60) and (1.61). Although the isosceles triangle restriction is not necessary, in view of the complexity of the explicit algebraic solution for the symmetric case, one can guess the impossibility of achieving an explicit relationship for the asymmetric case (the more general case: ABC a right-angled scalene triangle). For this case a proof of existence and uniqueness of the solution is given, together with a proof of the impossibility of getting such a relationship, even implicitly, if the sextic equation (2.54) isn't solvable. Nevertheless, in (2.56) - (2.58) the way to solve the asymmetric case is shown, under the condition that (2.54) be solvable. Furthermore, it is proved that with a slight modification in the final set of variables (F), it is still possible to establish a relation between them, see (2.59) and (2.61), which provides a bridge that connects the primitive relationship by means of numerical methods, for every given right-angled triangle ABC. And as the attempt to solve Fermat's conjecture (or Fermat's last theorem), culminated more than three centuries later by Andrew Wiles, led to the development of powerful theories of more general scope, the attempt to solve Masayoshi's problem has led to the development of the Theory of Overlapping Polynomials (TOP), whose application to this problem reveals a great potential that might be extrapolated to other frameworks. Wasan Geometry: 3-Isoincircles Problem. Trigonometric Analysis of a Hard Sangaku Challenge. Sacred Mathematics: Japanese Temple Geometry. Fukagawa Hidetoshi - Tony Rothman. Still Harder Temple Geometry Problems: Chapter 6 - Problem 3. La Représentation de Bonne Comments: 13 Pages. In French. This paper gives the defining elements of the Bonne map projection. It was used for the older 1:50,000-scale cartography of Tunisia and Algeria. Calculating the Angle Between Projections of Vectors Via Geometric (Clifford) Algebra We express a problem from visual astronomy in terms of Geometric (Clifford) Algebra, then solve the problem by deriving expressions for the sine and cosine of the angle between projections of two vectors upon a plane. Geometric Algebra enables us to do so without deriving expressions for the projections themselves. The Flow of HyperKaehler-Ricci Studying the flow of Kaehler-Ricci, a flow is defined for a manifold which is HyperKaehler.
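For the "Calculating the Angle Between Projections of Vectors Via Geometric (Clifford) Algebra" entry above, here is a plain linear-algebra sketch of the quantity in question (not the GA derivation the paper develops): project both vectors onto the plane with unit normal n, then measure the angle between the projections. The vectors and function names below are my own illustrative choices.

import numpy as np

def project_onto_plane(v, n):
    # Remove from v its component along the unit normal n (vector rejection).
    n = n / np.linalg.norm(n)
    return v - (v @ n) * n

def angle_between_projections(a, b, n):
    pa = project_onto_plane(a, n)
    pb = project_onto_plane(b, n)
    c = (pa @ pb) / (np.linalg.norm(pa) * np.linalg.norm(pb))
    return np.arccos(np.clip(c, -1.0, 1.0))

a = np.array([1.0, 2.0, 0.5])
b = np.array([-1.0, 1.0, 3.0])
n = np.array([0.0, 0.0, 1.0])   # normal of the xy-plane
print(np.degrees(angle_between_projections(a, b, n)))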
The flow of Ricci-Schrödinger The flow of Ricci-Schrödinger is defined from the flow of Ricci, like the Schrödinger equation is a twist of the heat equation. Remarks on Liouville-Type Theorems on Complete Noncompact Finsler Manifolds Authors: Songting Yin, Pan Zhang In this paper, we give a gradient estimate of the positive solution to the equation $$\Delta u=-\lambda^2u, \ \ \lambda\geq 0$$ on a complete non-compact Finsler manifold. Then we obtain the corresponding Liouville-type theorem and Harnack inequality for the solution. Moreover, on a complete non-compact Finsler manifold we also prove a Liouville-type theorem for a $C^2$-nonnegative function $f$ satisfying $$\Delta f\geq cf^d, c>0, d>1, $$ which improves a result obtained by Yin and He. An Upper Bound for Lebesgue's Universal Covering Problem Authors: Philip Gibbs The universal covering problem as posed by Henri Lebesgue in 1914 seeks to find the convex planar shape of smallest area that contains a subset congruent to any point set of unit diameter in the Euclidean plane. Methods used previously to construct such a cover can be refined and extended to provide an improved upper bound for the optimal area. An upper bound of 0.8440935944 is found. A Remark on the Localization Formulas About Two Killing Vector Fields Authors: Xu Chen In this article, we will discuss localization formulas of equivariant cohomology about two Killing vector fields on the set of zero points ${\rm{Zero}}(X_{M}-\sqrt{-1}Y_{M})=\{x\in M \mid |Y_{M}(x)|=|X_{M}(x)|=0 \}.$ As an application, we use it to get formulas about characteristic numbers and to get a Duistermaat-Heckman type formula on a symplectic manifold. A Poincaré-Hopf Type Formula for A Pair of Vector Fields We extend the result about the Poincaré-Hopf type formula for the difference of the Chern character numbers (cf. [3]) to non-isolated singularities, and establish a Poincaré-Hopf type formula for a pair of vector fields for which the function $h^{T_{\mathbb{C}}M}(\cdot,\cdot)$ has non-isolated zero points over a closed, oriented smooth manifold of dimension $2n$. Sennimalai Kalimuthu Publications Authors: Sennimalai Kalimuthu Comments: 03 Pages. Interested people may contact me at any time. The 5th Euclidean postulate is a 2300-year-old mathematical impossibility. I have worked on this problem for nearly 35 years and found a number of consistent solutions. My findings have appeared in international peer-reviewed research journals. Generation of power freely from space, space Bombs, Lion's Tonic and Lemurian Yoga are my ambitious scientific projects. Interested researchers and people may contact me at +91 8508991577. My email is [email protected] and [email protected] Poliedros Fórmulas Indemostradas (Polyhedra: Undemonstrated Formulas) Authors: Carlos Alejandro Chiappini Leonhard Euler proved that in a convex regular polyhedron there are three numbers that obey a law expressed in an equation known as Euler's formula: the number of faces, the number of vertices and the number of edges. This document presents some further formulas, obtained by trial and error from a table containing the data of the 5 convex regular polyhedra. These undemonstrated formulas look plausible. Finding a way to prove the validity or invalidity of these formulas could be an interesting task for people who love topology.
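A tiny check of the relation the "Poliedros Fórmulas Indemostradas" note starts from (only Euler's V - E + F = 2 for the five convex regular polyhedra; the note's own trial-and-error formulas are not reproduced here):

# (V, E, F) for the five convex regular (Platonic) polyhedra.
platonic = {
    "tetrahedron":  (4, 6, 4),
    "cube":         (8, 12, 6),
    "octahedron":   (6, 12, 8),
    "dodecahedron": (20, 30, 12),
    "icosahedron":  (12, 30, 20),
}

for name, (v, e, f) in platonic.items():
    # Euler's formula for convex polyhedra: V - E + F = 2.
    print(f"{name:12s}  V - E + F = {v - e + f}")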
Projection of a Vector upon a Plane from an Arbitrary Angle, via Geometric (Clifford) Algebra We show how to calculate the projection of a vector, from an arbitrary direction, upon a given plane whose orientation is characterized by its normal vector, and by a bivector to which the plane is parallel. The resulting solutions are tested by means of an interactive GeoGebra construction. Formulas and Spreadsheets for Simple, Composite, and Complex Rotations of Vectors and Bivectors in Geometric (Clifford) Algebra Comments: 29 Pages. Formulas and Spreadsheets for Simple, Composite, and Complex Rotations of Vectors and Bivectors in Geometric (Clifford) Algebra We show how to express the representations of single, composite, and ``rotated" rotations in GA terms that allow rotations to be calculated conveniently via spreadsheets. Worked examples include rotation of a single vector by a bivector angle; rotation of a vector about an axis; composite rotation of a vector; rotation of a bivector; and the ``rotation of a rotation". Spreadsheets for doing the calculations are made available via live links. Replacements of recent Submissions [102] viXra:1905.0026 [pdf] replaced on 2019-05-11 06:44:00 Comments: 11 Pages. accepted for M. Gavrilova et al (eds.), Proceedings of Workshop ENGAGE 2019 at CGI 2019 with Springer LNCS, April 2019, 1 table, corrections: 03+11 May 2019. Comments: 11 Pages. accepted for M. Gavrilova et al (eds.), Proceedings of Workshop ENGAGE 2019 at CGI 2019 with Springer LNCS, April 2019, 1 table, correction: 03 May 2019. Direct Sum Decomposition of a Linear Vector Space Authors: Anamitra Palit The direct sum decomposition of a vector space has been explored to bring out a conflicting feature in the theory. We decompose a vector space using two subspaces. Keeping one subspace fixed we endeavor to replace the other by one which is not equal to the replaced subspace. Proceeding from such an effort we bring out the conflict. From certain considerations it is not possible to work out the replacement with an unequal subspace. From alternative considerations an unequal replacement is possible. [99] viXra:1903.0317 [pdf] replaced on 2019-03-18 18:09:25 This paper is mainly focused on the equivalence of closed figures and infinitely extended lines. Using this principle, some major conclusions can be drawn. The equivalence of closed figures and infinitely extended lines is mainly based on the idea that closed figures and infinitely extended lines are equivalent. One of the most significant conclusions drawn from this equivalency is that if any object moves along a straight infinitely extended line, it will return back to the point, where it started to move, after some definite time. This principle of equivalence of closed figures and infinitely extended lines may lead us to understand the physical reality of infinities. Comments: 16 Pages. published in Adv. of App. Cliff. Algs., 29:46, pp. 1-16, 2019. DOI: 10.1007/s00006-019-0964-1, 1 table. This work explains how three dimensional quadrics can be defined by the outer products of conformal geometric algebra points in higher dimensions. These multivector expressions code all types of quadrics in arbitrary scale, location and orientation. Furthermore, a newly modified (compared to Breuils et al, 2018, https://doi.org/10.1007/s00006-018-0851-1.) approach now allows not only the use of the standard intersection operations, but also of versor operators (scaling, rotation, translation). 
The new algebraic form of the theory will be explained in detail.
Comments: Submitted to Topical Collection of Adv. in Appl. Clifford Algebras, for Proceedings of AGACSE 2018, 23 Feb. 2019, 15 pages. 4 errors corrected: 25 Feb. 2019. Proposition 4.1 corrected: 02 Mar. 2019.

The Curvature and Dimension of Non-Differentiable Surfaces
Authors: Shawn Halayka
The curvature of a surface can lead to fractional dimension. In this paper, the properties of the 2-sphere surface of a 3D ball and the 2.x-surface of a 3D fractal set are considered. Tessellation is used to approximate each surface, primarily because the 2.x-surface of a 3D fractal set is otherwise non-differentiable.

Geometries of O adhere to Ockham's principle of simplest possible ontology: the only individuals are points; there are no straight lines, circles, angles, etc., just as was laid down by Tarski in the 1920s, when he put forward a set of axioms that contain only two relations, quaternary congruence and ternary betweenness. However, relations are not as intuitive as functions where constructions are concerned. Therefore the planar geometries of O contain only functions and no relations to start with. Essentially three quaternary functions occur: appension for line-joining of two pairs of points, linisection representing intersection of straight lines, and circulation corresponding to intersection of circles. Functions are strictly defined by composition of given ones only. Both Euclidean and Lobachevskian planar geometries are developed using a precise notation for object-language and metalanguage that allows for a very broad range of mathematical systems, up to the theory of types. Some astonishing results are obtained, among them: (A) based on a special triangle construction, Euclidean planar geometry can start from a less powerful ontological basis than Lobachevskian geometry; (B) the usual Lobachevskian planar geometry is not complete; there are nonstandard planar Lobachevskian geometries, and one needs a further axiom, the 'smallest' system being produced by the proto-octomidial axiom; (C) real numbers can be abandoned in connection with planar geometry. A very promising conjecture is put forward, stating that the Euclidean Klein model of Lobachevskian planar geometry does not contain all points of the constructive Euclidean unit circle.

The Seiberg-Witten Equations for Vector Fields
By analogy with the Seiberg-Witten equations, we propose two differential equations depending on a spinor and a vector field instead of a connection. Good moduli spaces are expected as a consequence of commutativity. We also define the notion of Balan-Killing manifolds, which are spin manifolds whose metrics satisfy a certain differential equation; we take our inspiration from the notion of Killing spinors.
One theme of this paper is to extend known results from polygons and balls to general convex bodies in $n$ dimensions. Another theme stems from approximating a convex surface with a polytope surface. Our result gives a necessary and sufficient condition for a natural approximation method to succeed (in principle) in the case of surfaces of convex bodies. Thus, Schwartz's paradox does not affect our method. This allows us to define certain surface measures on surfaces of convex bodies in a novel and simple way.

The Quaternionic Seiberg-Witten Equations
We define here the Seiberg-Witten equations in the quaternionic case. We formulate some algebra of the Hamilton numbers and study geometric applications of the quaternions.

Following the definition of the Ricci flow, and with the help of the Dirac operator, we construct a flow of Hermitian metrics for the spinor fiber bundle. The Ricci flow is defined for the Hermitian metrics of a complex fiber bundle.
Comments: 3 Pages. The paper considers a problem in Wasan geometry.

The Ricci-Schrödinger flow is defined from the Ricci flow in the same way that the Schrödinger equation is obtained from the heat equation.

Estimation of the Earth's "Unperturbed" Perihelion from Times of Solstices and Equinoxes
Published times of the Earth's perihelions do not refer to the perihelions of the orbit that the Earth would follow if unaffected by other bodies such as the Moon. To estimate the timing of that "unperturbed" perihelion, we fit an unperturbed Kepler orbit to the timings of the year 2017's equinoxes and solstices. We find that the unperturbed 2017 perihelion, defined in that way, would occur 12.93 days after the December 2016 solstice. Using that result, calculated times of the year 2017's solstices and equinoxes differ from published values by less than five minutes. That degree of accuracy is sufficient for the intended use of the result.
How to rigorously prove that this sequence of stochastic processes converges to a deterministic process?

Assume that for each $n\in\mathbb{N}$ there is a stochastic function $f_n$ of type $\mathbb{R}^{m}\to\Delta\mathbb{R}^{m}$, and that for each $x\in\mathbb{R}^{m}$ the distributions $\frac{f_n(x)-x}{1/n}$ converge weakly as $n\to\infty$, with the $n$th distribution roughly $O(\frac{1}{n^2})$ away from the limiting distribution in the usual earth-mover distance. Also, the function which maps a given starting point $x$ to its (rescaled) limiting distribution $\lim_{n\to\infty}\frac{f_n(x)-x}{1/n}$ is known to be Lipschitz.

The particular thing I'm trying to prove is that for any finite time interval $[0,I]$ and any $\epsilon,\delta>0$, in the $n\to\infty$ limit all but a $\delta$-fraction of the trajectories obtained by applying $f_n$ $In$ times to a starting point are $\epsilon$-close to the trajectory of the differential equation $$\frac{dx}{dt}=\lim_{n\to\infty}\mathbb{E}\left[\frac{f_n(x)-x}{1/n}\right]$$ (i.e., in a tiny time interval the point just moves deterministically to the average of its next positions, instead of moving randomly).

Now, there's a strong informal intuition for why it would work out this way. Because the function which maps a given starting point $x$ to its (rescaled) limiting distribution $\lim_{n\to\infty}\frac{f_n(x)-x}{1/n}$ is Lipschitz, for large $n$ it should be possible to iterate the function a whole bunch of times in a row without the distribution of next positions changing much. Splitting each application of the function into a deterministic "drift" term that goes to the average next position and a random term with mean 0, the resulting distribution should look something like "sum up all the drift terms, and by martingale central limit theorem arguments the random terms will mostly cancel out and look like a Gaussian with the corresponding covariance matrix." But since the "spread" of the Gaussian is about $\sqrt{\text{number of iterations}}\cdot\text{step size}$, and the step size scales as $\frac{1}{\text{total number of iterations}}$, the "spread" after $n$ steps would be about $\sqrt{n}\cdot\frac{1}{n}=\frac{1}{\sqrt{n}}$, and so in the $n\to\infty$ limit the Gaussian sharpens up to a single point, a Dirac delta distribution. The variance decreases too fast as $n$ grows, so in the limit the differential equation is deterministic, not stochastic, and only the "drift" terms survive from the repeated application of $f_n$ for large $n$.

The issue is that, although the informal argument is quite clear, I don't know what theorems would be involved in rigorously proving this conjecture. The stochastic differential equation theorems I've looked at seem to be primarily about how to analyze stochastic differential equations once you already have them, not about rigorously showing that a given sequence of discrete stochastic processes has a certain SDE as its limit. A response like "if you prove this bound and that bound and this other bound on your stochastic function, we can directly apply Theorem 3.24 from BlahBlah to show your desired result" would be ideal.

stochastic-processes differential-equations stochastic-differential-equations
asked Nov 26, 2022 at 22:37 by Alex Appel

Comment: What is $\Delta \mathbb R^m$?

Comment: The space of probability distributions over $m$-dimensional Euclidean space.
– Alex Appel, Nov 27, 2022 at 0:03

Comment: The rest of your question doesn't make sense, then. First of all, the expression $f_n(x)-x$ is not defined, as $f_n(x)$ is a measure and $x$ is a vector.

Comment: That's the probability distribution over points produced by sampling a point from the probability distribution $f_n(x)$ and subtracting the point $x$ from the result.

Comment: In that case, $f_n(x)$ is an $\mathbb R^m$-valued random variable. It doesn't take values in $\Delta \mathbb R^m$.

Answer:
I am guessing that in "The particular thing I'm trying to prove is that,..." you are talking about the convergence of the discrete generator to the continuous one. The natural topology for these questions is the Skorokhod topology. See here for some ideas and references/keywords: https://math.stackexchange.com/questions/4225750/continuous-limit-of-a-discrete-stochastic-process

We denote by $P_n$ the transition kernel of $X^n$, given by $$P_nf(x) = \mathbb{E}\left[f\left(X_{t_1}^n\right)\,\middle|\, X_0 = x\right],$$ for functions $f$ of a suitable class (we will take that class to be $C^2\left(\left[0,1\right]\right)$). We will show that the discrete generators of $X^n$, $$\mathcal{L}_n = \frac{P_n - I}{\Delta t^n},$$ where $I$ is the identity operator (i.e. $If = f$ for any function $f$), converge to $\mathcal{L}$.

As mentioned there, some good references are Billingsley's Convergence of Probability Measures and Ethier and Kurtz's Markov Processes. If you are dealing with SPDEs, you can also check out the literature on Wong-Zakai-type theorems.
answered by Thomas Kojar

Comment: Thanks, this was extremely helpful! Tightness plus "there can only be one limit point because the generators converge" sufficed quite nicely for my purposes, and I've got a few helpful references to chew on now. All I'm missing at this point is how to go back from the infinitesimal generator of the limiting stochastic process (which is known) to the actual differential equation for the limiting process, to show that the limiting differential equation is what I conjectured it to be.

Comment: Theorem 6.5 in Ethier and Kurtz was exactly what I was looking for. Problem resolved.
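A quick numerical sanity check of the conjecture is easy to set up. The sketch below assumes a concrete, hypothetical form $f_n(x) = x + \frac{1}{n}\,(b(x) + \xi)$ with a Lipschitz drift $b$ and centred Gaussian noise $\xi$, so that the rescaled displacement $n\,(f_n(x)-x)$ has a fixed limiting distribution as in the question's setup; the drift function b, the noise scale sigma, the horizon and the other parameter names are illustrative choices, not anything from the original exchange. Applying $f_n$ roughly $In$ times and comparing the endpoints with an Euler solution of $dx/dt = b(x)$ should show the spread shrinking roughly like $1/\sqrt{n}$, matching the informal argument above.

import numpy as np

def b(x):
    # Lipschitz drift: plays the role of lim_{n->inf} n * (E[f_n(x)] - x).
    return -x + np.sin(x)

def simulate(n, x0=1.0, horizon=2.0, n_paths=2000, sigma=1.0, seed=0):
    """Apply the hypothetical f_n  floor(horizon * n) times and return the endpoints."""
    rng = np.random.default_rng(seed)
    x = np.full(n_paths, x0)
    for _ in range(int(horizon * n)):
        noise = sigma * rng.standard_normal(n_paths)
        x = x + (b(x) + noise) / n          # one application of f_n
    return x

def ode_limit(x0=1.0, horizon=2.0, n_steps=200_000):
    """Conjectured deterministic limit: Euler scheme for dx/dt = b(x)."""
    x, dt = x0, horizon / n_steps
    for _ in range(n_steps):
        x += b(x) * dt
    return x

if __name__ == "__main__":
    x_det = ode_limit()
    for n in (50, 500, 5000):
        xs = simulate(n)
        # The standard deviation around the ODE endpoint should decay roughly like 1/sqrt(n).
        print(f"n={n:5d}  mean={xs.mean():+.4f}  std={xs.std():.4f}  ode={x_det:+.4f}")

The empirical means should agree with the deterministic Euler value to within the shrinking standard deviation; for a rigorous argument, the generator-convergence route described in the answer (e.g. Ethier and Kurtz, Theorem 6.5, as noted in the comments) is the appropriate tool.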